
Scenario: Mandatory AI Participation — AI Week, Hackathons, and "Embrace AI" Campaigns

The company announces an all-hands AI Innovation Week / AI Hackathon / internal AI competition. The stated purpose is "learning and exploration." The actual purpose may be: testing which jobs can be replaced by AI, collecting employees' workflows and judgment logic, and manufacturing internal evidence for the "AI efficiency" narrative.


Real Cases

TCS (2025): 281,000 employees participated in a global AI Hackathon spanning 58 countries. Official framing: "accelerating an AI-first culture" and "having employees meaningfully integrate AI into daily work."

SimCorp (February 2026): Partnered with Microsoft on a 2,000-employee AI Hackathon. Challenge categories included "AI for operational efficiency" and "accelerating onboarding." Read between the lines, that second one means: use AI to flatten the learning curve for new hires, thereby reducing dependence on experienced people.

General pattern: The company organizes a 1-5 day AI event and requires employees to use AI tools to replicate their own daily work. The best results get showcased and publicized, becoming evidence of the company's "successful AI transformation."


What It's Actually Doing

AI Week is a low-resistance knowledge extraction mechanism. It's more covert than directly asking you to write documentation, because it's dressed up as "learning" and "innovation."

Break down its real functions:

Surface function → actual function:

  • "Help employees learn AI tools" → Test how much AI can replicate employees' work
  • "Explore AI possibilities" → Collect real workflows and judgment logic as AI training material
  • "Spark innovation" → Get employees to prove which parts of their own jobs can be automated
  • "Showcase excellent results" → Produce internal evidence that "AI replaces headcount," supporting management layoff decisions

The most elegant part: employees participate voluntarily, even enthusiastically. When you demo "I used AI to compress a week of work into two hours" at the hackathon, you think you're showcasing your AI skills. Management sees: this position might only need 1/5 of the people.


Strategy

Attend. Don't boycott.

Skipping gets you flagged as "resistant to change" or "anti-innovation." At many companies, AI Week participation is treated as an attitude signal — the cost of not showing up far outweighs the cost of showing up.

Choose What You Demo

This is your real control point. You can't not attend, but you can choose what you build at the hackathon.

Recommended:

  • Use AI to automate the boring, repetitive, low-value parts of your work: log analysis, boilerplate code generation, document formatting
  • This work will get automated sooner or later; doing it proactively yourself adds no extra risk
  • Demoing these results makes you look proactive and AI-capable, and management is satisfied
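The "safe to demo" category is work where the whole pipeline can be scripted end to end with no judgment involved. As a minimal illustration of the log-analysis kind of task (the log format, field names, and sample data here are all made up), a triage script of the sort worth showing off:

```python
import re
from collections import Counter

# Hypothetical log format: "2025-01-15 12:00:01 ERROR TimeoutError: upstream timed out"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<type>\w+):")

def triage(lines):
    """Count ERROR lines by exception type -- the repetitive part of log review."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("type")] += 1
    return counts

sample = [
    "2025-01-15 12:00:01 ERROR TimeoutError: upstream timed out",
    "2025-01-15 12:00:02 INFO Healthcheck: ok",
    "2025-01-15 12:00:03 ERROR TimeoutError: upstream timed out",
    "2025-01-15 12:00:04 ERROR KeyError: missing field",
]

if __name__ == "__main__":
    print(triage(sample))  # Counter({'TimeoutError': 2, 'KeyError': 1})
```

Nothing in a script like this reveals how you decide what the error counts mean or what to do about them; that interpretation step stays with you.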

Avoid:

  • Don't use AI to replicate your core decision-making processes — "I used AI to automate architecture reviews" sounds impressive, but you're telling management: architecture reviews don't need you anymore
  • Don't demo AI replacing your coordination and communication work with other teams — automating this has extremely high value to the company (high labor cost). Once feasibility is demonstrated, your position is at risk
  • Don't expose your deep judgment logic in prompts — some companies audit AI tool usage logs

Control the Narrative

If your hackathon result needs a presentation or writeup, watch how you describe it:

Dangerous framing: "I used AI to finish my usual week of work in just two hours." → Management translation: "This person's position can be cut by 80%."

Safer framing: "I used AI to handle the data cleaning and format conversion, which freed me up to focus more on architecture design and cross-team coordination." → Management translation: "AI is a tool, but the core work still needs a human."

The difference isn't in what you did. It's in how you tell the story of what you did.


When AI Week Results Are Expected to Continue

Some companies won't let hackathon results stay at the demo stage. They'll say: "That automation workflow you built at AI Week was great — can you turn it into a permanent tool?"

At that point you're no longer dealing with a one-time event. You're facing ongoing knowledge transfer. Fall back to the strategy framework from the knowledge extraction scenario: deliver the how, protect the why.

Build the automation tool, sure. But make it handle the parts you choose — standardized, repetitive, with boundaries you defined. The parts that actually need judgment stay behind "requires manual review" or "special cases need hands-on handling" by design.
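One way to make that boundary concrete in the tool itself is a routing function: standardized cases go through automatically, and anything touching the judgment-heavy branch is explicitly handed to a human. A minimal sketch; every name here (`ROUTINE_CHANGES`, `route`, the categories) is illustrative, not from any real system:

```python
# The boundary is a design decision baked into the tool: routine cases are
# fully automated; anything needing judgment is deliberately routed to a human.

ROUTINE_CHANGES = {"docs", "chore", "test"}  # categories you decided are safe to automate

def route(change_type: str, touches_prod_config: bool) -> str:
    """Return 'auto' for standardized cases, 'manual-review' for everything else.

    The judgment-heavy branch is not automated, and the tool says so
    explicitly instead of guessing."""
    if change_type in ROUTINE_CHANGES and not touches_prod_config:
        return "auto"
    return "manual-review"  # "special cases need hands-on handling" -- by design

print(route("docs", False))      # auto
print(route("refactor", False))  # manual-review
print(route("chore", True))      # manual-review: prod config always gets a human
```

The point of structuring it this way is that the "requires manual review" path isn't a gap you failed to automate; it's a documented property of the tool you shipped.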


AI Usage Rate as a Performance Metric

Some companies are starting to fold "AI tool usage rate" into performance reviews — how often you used Copilot, your AI code acceptance rate, your activity level on the internal AI platform.

This is a bullshit metric, but it's being taken seriously.

How to deal with it:

  • Keep your visible AI usage frequency up — let Copilot write your tests, generate boilerplate, auto-complete routine code. This pumps up your "usage" numbers
  • Use your own judgment on core decisions, then let AI contribute numbers where it doesn't matter
  • You're managing a metric, same as you manage any other KPI — spend the minimum effort to hit the passing bar

Data reference: GitHub Copilot's suggestion acceptance rate is roughly 30%. If your rate is significantly above or below that number, it might draw attention. Stay within the normal range.
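If you want to sanity-check your own numbers, the arithmetic is trivial. A sketch; the 30% baseline comes from the figure above, but the band width is an assumption for illustration, not an official threshold:

```python
def acceptance_rate(accepted: int, shown: int) -> float:
    """Fraction of AI suggestions accepted, out of those shown."""
    return accepted / shown if shown else 0.0

def in_normal_band(rate: float, baseline: float = 0.30, tolerance: float = 0.10) -> bool:
    """True if the rate sits within an assumed 'unremarkable' band around the baseline."""
    return abs(rate - baseline) <= tolerance

r = acceptance_rate(27, 100)
print(round(r, 2), in_normal_band(r))  # 0.27 True
print(in_normal_band(0.80))           # False: suspiciously high, likely to draw attention
```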

Released under CC BY-SA 4.0.