The on-again, off-again nature of the work is not just the result of company culture; it stems from the cadence of AI development itself. People across the industry described the pattern. A model builder, like OpenAI or Anthropic, discovers that its model is weak on chemistry, so it pays a data vendor like Mercor or Scale AI to find chemists to make data. The chemists do tasks until there is a sufficient quantity for a batch to go back to the lab, and the job is paused until the lab sees how the data affects the model. Maybe the lab moves forward, but this time, it’s asking for a slightly different type of data. When the job resumes, the vendor discovers the new instructions make the tasks take longer, which means the cost estimate the vendor gave the lab is now wrong, which means the vendor cuts pay or tries to get workers to move faster. The new batch of data is delivered, and the job is paused once more. Maybe the lab changes its data requirements again, discovers it has enough data, and ends the project or decides to go with another vendor entirely. Maybe now the lab wants only organic chemists and everyone without the relevant background gets taken off the project. Next, it’s biology data that’s in demand, or architectural sketches, or K–12 syllabus design.