Models enabling on-device AI: In line with the trend toward smaller, more use-case-specific models, hybrid AI/on-device AI promises scalability, performance, and personalization at reduced cost, with the added benefit of on-premise/on-device security. If the model size and prompt (i.e., question) fall below a certain threshold, inference can run on the device itself (a threshold currently estimated at around 1 billion parameters, though models of up to 10 billion are expected to work eventually). Larger models can use a hybrid approach, splitting work across devices and the cloud. Bringing generative AI capabilities closer to the source can also enable per-user alignment and tuning. As models become more use-case specific, an on-device model has the added benefit that it can run and train locally without exposing data to hyperscalers.
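The size-and-prompt routing decision described above can be sketched as a simple dispatch function. The thresholds, names, and return values below are illustrative assumptions for this sketch, not any vendor's actual API:

```python
# Hypothetical sketch of the hybrid routing decision: run inference
# on-device when the model and prompt fit within the device's budget,
# otherwise fall back to a hybrid device/cloud split or to the cloud.
# All limits here are assumed values for illustration.

ON_DEVICE_PARAM_LIMIT = 1_000_000_000    # ~1B parameters runs on-device today
ON_DEVICE_PROMPT_LIMIT = 2_048           # assumed token budget for local inference
HYBRID_PARAM_LIMIT = 10_000_000_000      # ~10B expected to work eventually


def route_request(model_params: int, prompt_tokens: int) -> str:
    """Decide where a generative-AI request should execute."""
    if model_params <= ON_DEVICE_PARAM_LIMIT and prompt_tokens <= ON_DEVICE_PROMPT_LIMIT:
        return "on-device"   # data never leaves the device
    if model_params <= HYBRID_PARAM_LIMIT:
        return "hybrid"      # split work between device and cloud
    return "cloud"           # too large for local execution


print(route_request(700_000_000, 512))     # small model, short prompt: on-device
print(route_request(7_000_000_000, 512))   # mid-size model: hybrid
print(route_request(70_000_000_000, 512))  # large model: cloud
```

In practice the routing criteria would also weigh device memory, battery, and latency targets; the point of the sketch is only that the device/cloud split is a per-request decision, not a fixed deployment choice.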