Meta Platforms (NASDAQ:META) just gave us a clearer look at what it thinks the real AI race is about. Not "who has the flashiest model demo?" and not even "who buys the most GPUs?" Meta's latest moves point to something more practical: who can improve models the fastest, every single week, and ship those gains into products at scale.
That’s why two headlines that look unrelated actually fit together neatly. First, Meta is building a new Applied AI Engineering organization with an ultra-flat setup—up to 50 individual contributors per manager—to industrialize the unglamorous work that turns a decent model into a better one: tooling, task execution, evaluations, reinforcement learning, and post-training loops. Second, it signed a multiyear content licensing deal with News Corp that can reach $50 million a year, giving Meta access to premium U.S. and U.K. news content for both training on archives and real-time retrieval in its AI products.
Put those together and you get a simple picture: Meta is trying to build a data refinery. Raw inputs come in—fresh news, long-tail archives, and Meta’s own massive first-party signals. Then an internal factory cleans, tests, and upgrades those inputs into better training data and tougher evals. The output is faster model improvement. And if the model improves faster, Meta’s bet is that products ship faster too—across feeds, ads, messaging, and Meta AI.
If you’re wondering why Meta is spending so much energy on “data” and “process,” this is the reason. In AI, the company that perfects the factory often beats the company that wins a single headline.
Licensed News Inputs & First-Party Signals Become Two Grades Of Fuel
Think of Meta's news licensing as signing a long-term supply deal. It is not just "content spend"; it is raw material for machines. The News Corp pact runs at least three years and can reach up to $50 million per year, and it lets Meta use U.S. and U.K. content for two jobs: training on archives, and real-time retrieval in its AI products. That matters because "what's happening now" is hard to get right and changes fast, and a licensed feed can reduce the friction of building those features.
Now layer in Meta's own fuel. Meta ended 2025 with over 3.5 billion people using at least one of its apps daily; Facebook and WhatsApp each had over 2 billion daily actives, and Instagram was close behind. That scale creates constant signals, and it creates context. Zuckerberg pointed to personal history, interests, content, and relationships, framing that as an advantage for agents. This is not generic web text; it is first-party behavior and a social graph.
You can also see the "freshness" push inside the products. On Facebook, systems surfaced over 25% more Reels published that same day versus the prior quarter. On Instagram in the U.S., 75% of recommendations came from original posts, up 10 points in Q4. These details fit the fuel story: fresh content helps real-time relevance, long histories help predictions, and archives help broad knowledge. A refinery wants all of it, blending inputs for different tasks.
The Applied AI Engineering Org Is The Factory Floor For Evals & Post-Training
Meta's internal memo is unusually direct about what wins: strong models need real-world data, feedback, and evals. That is the flywheel. So Meta created an Applied AI Engineering organization, sitting under CTO Andrew Bosworth and led by Maher Saba, with a deliberately flat structure; the memo cites up to a 50:1 IC-to-manager ratio. That is a design choice for throughput: it pushes work to builders and reduces layers.
The org has two teams. One builds interfaces and tooling; the other executes tasks, generates data, and provides evaluations. That second team is the quality line: it turns messy reality into training-grade feedback and standardizes how modeling teams learn. The memo points to gains from reinforcement learning and post-training, and argues Meta can move faster by doubling down. That is not just research rhetoric; it is an operating model.
The transcript backs this up in a different way. Zuckerberg said 2026 will be about shipping new models and products, and that the key is "trajectory." That word matters: a trajectory is not a single launch but repeatable improvement. Susan Li added that Meta is investing in AI-native tooling; output per engineer rose 30% since early 2025, and power users saw 80% output growth year over year. That hints at a second flywheel. The factory builds tools that help the factory, which can shorten iteration loops and widen the gap between fast teams and slow teams.
Recommendation Systems, Ads, And Meta AI Turn Better Data Into Immediate Distribution
A refinery is only useful if it feeds a distribution engine, and Meta already has one: the feed, the ad stack, messaging, and Meta AI. The transcript has plenty of "here's how this shows up" details. Instagram Reels watch time rose over 30% year over year in the U.S., Facebook video time grew double digits, and Threads time spent rose 20% from Q4 optimizations. These are not abstract wins; they are the surface area where better models become more usage.
Meta also described changes in how models scale. Instagram improved recommendations by simplifying its ranking architecture, which enabled more efficient scaling and let systems consider longer interaction histories. Susan Li said Meta plans to increase the amount of data used, emphasizing longer histories and session-level adaptation: Meta wants recommendations to reflect what a person wants "at that moment." Then she added a key point: LLMs may help most with recently posted content, when engagement data is thin. This lines up with the licensing angle. Fresh inputs help when the crowd has not voted yet.
Ads are the second distribution engine. Meta doubled the GPUs used to train its GEM ads ranking model in Q4 and adopted a sequence learning architecture that can use longer sequences of user behavior. GEM plus sequence learning drove a 3.5% lift in ad clicks on Facebook and more than a 1% conversion gain on Instagram. Meta also launched a new runtime model across Instagram surfaces that improved conversion rates by 3%. Susan Li explained the pattern: big models teach smaller runtime models, which is how performance scales without costs blowing up. The refinery builds the teacher; the product ships the student.
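Meta has not published its training recipe, but the teacher-student pattern Li describes is generically known as knowledge distillation: a large model's softened output distribution becomes the training target for a compact runtime model. A minimal sketch, with all logits and the temperature value being illustrative assumptions rather than anything Meta has disclosed:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge."
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between teacher and student soft targets:
    # minimizing this trains the student to mimic the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]  # hypothetical scores from a large "teacher" ranker
student = [3.5, 1.2, 0.1]  # hypothetical scores from a small runtime "student"
loss = distillation_loss(teacher, student)
```

The economics follow from the shape of the setup: the expensive teacher runs offline during training, while only the cheap student runs at serving time, which is why performance can scale without inference costs scaling with it.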
Compute Constraints, Big Capex, And Valuation Define The Risk-Reward Envelope
Meta is still compute constrained. Susan Li said demand for compute rose faster than supply; she expects more capacity in 2026 as Meta adds cloud, but constraints will likely persist through much of the year, with capacity from Meta's own facilities arriving later in 2026. This matters for a refinery story, because throughput can be limited by machines. Meta's response is efficiency: diversifying chips, optimizing workloads, and raising utilization. These are practical levers, and they are not optional if you want a faster flywheel.
Meta also described concrete efficiency work. Susan Li said Andromeda, an ads retrieval engine, can now run on NVIDIA, AMD, and MTIA hardware, and that Meta nearly tripled Andromeda's compute efficiency. MTIA support will extend to core ranking and recommendation training workloads. Zuckerberg framed Meta Compute as a strategic advantage and pointed to long-term investments in silicon and energy. This suggests a multi-year build with high fixed costs: it can pay off if utilization stays high, and it can hurt if demand forecasts miss.
The financial scale is large. Q4 capital expenditures, including finance lease principal, were $22.1 billion; 2026 capex guidance is $115 to $135 billion, and total 2026 expenses are guided to $162 to $169 billion, with infrastructure the biggest driver of expense growth and talent next. Meta ended Q4 with $81.6 billion in cash and marketable securities against $58.7 billion of debt. Management expects operating income dollars in 2026 above 2025, but free cash flow can swing; heavy capex can do that. The refinery has a real price tag.
Final Thoughts: A Bigger Moat Story, With Bigger Execution And Valuation Questions
Meta's strategy reads like industrial planning: lock in premium news content for training and retrieval, build an internal engine for tooling, tasks, and evals, push post-training gains into products faster, then let the feed and ads system distribute the improvements. The upside is coherence; the pieces fit together. The memo's "data engine" goal matches the transcript's "trajectory" goal, and both match the product metrics Meta shared.
There are also clear risks. Compute is still constrained, and capex is stepping up sharply. Organization design can cut both ways: a very flat structure can raise speed, but it can also strain coordination. Licensed news helps freshness, but it may not cover every domain, and it creates recurring costs.
Valuation adds another layer. On current data, Meta trades around 8.26x LTM EV/Revenue and 16.30x LTM EV/EBITDA, with an LTM P/E of about 27.89x. Those are not distressed multiples; they assume durability, and they assume execution. Meanwhile, the NTM market cap to free cash flow figure spikes to 154.24x, reflecting the capex wave. In plain terms, the market may be valuing the refinery before it shows up in free cash flow. That can work out, and it can also widen downside if timelines slip.
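To see why the FCF multiple spikes while the other multiples stay ordinary, it helps to walk through the arithmetic. The figures below are round hypothetical inputs chosen for illustration only, not Meta's actual financials; the point is the mechanics, not the numbers:

```python
# Hypothetical inputs for illustration only; not Meta's reported figures.
enterprise_value = 1_650e9  # assumed enterprise value, USD
ltm_revenue      = 200e9    # assumed last-twelve-months revenue
ltm_ebitda       = 101e9    # assumed last-twelve-months EBITDA
market_cap       = 1_600e9  # assumed equity market capitalization
ntm_ocf          = 135e9    # assumed next-twelve-months operating cash flow
ntm_capex        = 125e9    # assumed next-twelve-months capital expenditures

ev_to_revenue = enterprise_value / ltm_revenue   # 1650 / 200 = 8.25x
ev_to_ebitda  = enterprise_value / ltm_ebitda    # 1650 / 101 ≈ 16.3x

# Free cash flow = operating cash flow minus capex. A capex wave shrinks
# the denominator, so the FCF multiple balloons even with healthy earnings.
ntm_fcf     = ntm_ocf - ntm_capex                # only $10B left over
mcap_to_fcf = market_cap / ntm_fcf               # 1600 / 10 = 160x
```

The asymmetry is the whole story: EV/Revenue and EV/EBITDA sit above the capex line, so they look routine, while FCF sits below it and gets crushed while the buildout runs hot.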
Disclaimer: We do not hold any positions in the above stock(s). Read our full disclaimer here.




