Over the past few years, the tech industry has gone from cushy landing pad for STEM grads to a cesspit of corporate greed, where grueling hours are commonplace, and layoffs could strike at any moment.
Unfortunately for employees of Alphabet, the parent company of Google, the squeeze is just getting started.
↫ Joe Wilkins at Futurism
Sergey Brin, one of the original co-founders of Google who seems to spend most of his time not working at Google, has sent out a company-wide memo demanding that everyone working at Google put in at least 60 hours a week, in the office, to work on “AI” that will eventually replace the very employees he’s demanding work 60 hours a week in the office. Mind you, this is the same Google that has just gone through several rounds of layoffs and made $26.3 billion in profit in a single quarter.
The goal, according to Brin, is for Google to be the first to create an “artificial general intelligence”, you know, that thing we used to just call “AI” until the Silicon Valley scammers got a hold of the term. There’s no indication anyone is even remotely close to anything even remotely related to “AGI”, and it’s highly unlikely the glorified autocomplete they’re peddling today is anything more than a very expensive dead end, but that’s not stopping him from working his employees to the bone.
At this point in time I feel like the big tech companies are racing towards a cliff, blinded by huge piles of investment money, deafened by each other’s hyperbolic claims and promises, while clueless politicians cheer them on. All of this is going to come crashing down in a spectacular fashion, and of course, the billionaires at the top won’t be the ones suffering the consequences.
As is tradition.
This is irresponsible. I hope Googlers can find fulfilling careers elsewhere.
As an LLM skeptic and software engineer, though, I have to admit there is some impressive stuff out there. It’s good to have an open mind. I recently got an agentic LLM to do quite a bit of useful busywork for me. It refactored and rewrote code the way I requested with 95% accuracy. As someone who recently started a company, getting this work done for pennies is great. It’s not challenging work, yet it still needs to be done. You still have to know what you’re doing, of course. Even so, there’s more than nothing to this tech. Are LLMs worth as much as they’re hyped to be? No. But there’s definitely still something.
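For the curious, the workflow looks roughly like this. A minimal sketch using the OpenAI Python SDK; the model name, prompt, and file paths are placeholders, not what I actually used:

```python
# Sketch of LLM-assisted refactoring busywork (assumes the OpenAI
# Python SDK and an OPENAI_API_KEY in the environment; model name,
# prompt, and paths below are hypothetical).
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rewrite this module to use dataclasses instead of raw dicts. "
    "Change nothing else. Return only the rewritten code."
)

for path in Path("src").glob("*.py"):
    source = path.read_text()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": source},
        ],
    )
    # Write to a sidecar file and review every diff by hand --
    # that ~5% failure rate is real.
    path.with_name(path.name + ".new").write_text(
        response.choices[0].message.content
    )
```

The human review step at the end is exactly where you still have to know what you’re doing.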
Anyway, just my $0.02. Thanks for keeping up with the news for all of us!
Thom Holwerda,
That’s not really accurate. Those of us in computer science circles have always understood AGI to be distinct from AI. AI is what beats you at specific tasks like chess, StarCraft, Jeopardy, etc. This has not changed. If you need to blame somebody for the blurred misconceptions, then blame Hollywood, not the tech industry.
I don’t say this to defend AI companies from legitimate criticism, of which there’s plenty to go around.
Personally I think that’s being too optimistic. Many experts see AGI as being decades away, which it may well be, but the truth is that AGI doesn’t need to be achieved for mass job displacement to occur. The biggest threat AI poses to us today is NOT that it becomes sentient (that may come later), but that so many of our jobs can be done without sentience. As expensive as it is to develop AI, what many people are forgetting is what’s even more expensive: human labor. Humans pointing the finger at AI firms for wasting money may not be making the connection here. While undoubtedly some of these AI investments will not pay off, it’s not those that fail that are a threat to us, it’s those that succeed. I think it’s too optimistic to conclude that 100% will fail. Disruptions are already afoot :-/
Completely agree with the differentiation. I remember being in college when the A* algorithm, dynamic programming, or expert systems (which in the end just apply rules) were considered AI. I remember having conversations about whether hard AI would ever be possible, which is basically what today is called AGI. I remember my position was that both the human brain and an x86 CPU have the same computability, being basically equivalent as Turing machines, so hard AI is possible, but it would require so much computational power and knowledge of how the brain works that we wouldn’t see it in our lifetimes.
I remember how the Turing test was basically impossible to pass, and I remember that chatbots sucked balls, for decades. So seeing what large language models are capable of doing today (and other AIs based on transformers, like DALL·E), I’m surprised how other people fail to see that this is a very rudimentary AGI already. Even if there is no leap in capacity, this technology as it is will change the world. And I also fail to understand why there is so much hate towards this amazing technology. I am really still trying to wrap my head around the fact that we have such an advanced technology on our hands.
What I think people don’t understand is that we are not going to make an intelligence like human intelligence, but something different. So expecting it to fail the same way humans fail, or to have feelings or purpose, is not reasonable. It is already an intelligence because it has some level of abstraction, capacity for generalization, problem solving, etc. It is general enough, as you can ask it about a huge variety of topics. You can explain the rules of a game to it and play a game with it and it will follow the rules, so sorry, but it is not just a glorified autocomplete.
Lastly, I invite you all to search for a company called Figure AI, which is working on a humanoid robot, and see the videos they have posted of what they already have. They are just merging together all the advances we have in computer vision, language generation and so on. Tell me that Figure 01 is just a “glorified autocomplete”.
Really, I don’t understand the hate, starting with the completely out-of-place criticism of VLC the other day. Holwerda is exactly like the people who criticized the phonograph because it would kill live music, or the TV because it would kill cinema.
Will the employees get paid for those extra hours? This is why work-from-home is a disaster for employee rights (despite the convenience it offers): employers can simply assign work and deadlines to employees without caring how many hours it takes to complete them. At least with work-from-office, there are grounds for complaint if most employees have to clock out late to finish the assigned work; with work-from-home, working hours are not traceable.
kurkosdr,
In the US, IT professionals who make $27.63/hr or more are exempted from employer overtime pay requirements. This “fact sheet” indicates that employers can also include a portion of bonuses/commission pay for this exemption purpose…
https://www.dol.gov/agencies/whd/fact-sheets/17a-overtime
IIRC Bill Gates personally went to Congress in the 90s to lobby for computer employees to be exempted from overtime pay. At least the dollar limits went up; they used to be much lower when I was a full-time employee – I never got paid a dime for overtime despite many 50-60 hour weeks.
So to answer your question, in 2024 the overtime cutoff for computer employees is $107k. Computer employees with higher salaries aren’t entitled to overtime pay but those with lower salaries are.
Edit: Everything I’m covering is US centric, I have no idea what overtime laws are like elsewhere.
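To make the arithmetic concrete, here’s a toy check using only the figures cited in this thread; a sketch, not legal advice, and deliberately simplified:

```python
# Toy overtime-exemption check using the numbers cited above
# (US-centric; the real FLSA rules have many more conditions).
HOURLY_CUTOFF = 27.63      # $/hour, federal computer-employee rate
ANNUAL_CUTOFF = 107_000    # $/year, the 2024 cutoff cited above

def exempt_from_overtime(annual_salary: float, weekly_hours: float = 40) -> bool:
    """True if the thresholds cited in this thread would exempt the employee."""
    hourly_equivalent = annual_salary / (weekly_hours * 52)
    return annual_salary >= ANNUAL_CUTOFF or hourly_equivalent >= HOURLY_CUTOFF

# A $120k engineer working Brin's 60-hour weeks: exempt, so the
# extra 20 hours a week come free to the employer.
print(exempt_from_overtime(120_000, 60))  # True
```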
I listen to CNBC every weekday, and the financial news involving the AI fantasy is amusing to me.
I refuse to believe that people can be productive if they work 12 hours a day for extended periods. I think that kind of work culture will only foster the sort of presenteeism we see in toxic work cultures like Japan or South Korea. No extra work gets done, and it only serves to destroy the mental health of the people forced into doing it.
Jeeves,
You assume it’s about getting work done, but maybe it’s more about pleasing the boss. Regularly going home on time may leave you getting overlooked for promotion.
/only half joking
So the good news is, they’re still allowed to work from home 2 days a week (Sat + Sun)?
Once upon a time, Google’s mantra was “Don’t be Evil”.
You know, from personal experience, I am starting to suspect that the more a person or organization has to explicitly point out that they are (or aren’t) something, in qualitative terms, the more likely it is that they are the opposite.
If you require your people to work 50% extra hours, that may be an indication that Mr. Brin needs to hire 50% more people instead… But that’s expensive, right?
A nit-pick: It wasn’t company-wide. It was sent to a rather small fraction of the employees.