Thoughts on AI from the internet
Nevertheless, we are not alone in the intuition that general AI is still far off. Learning from known data, human behaviour, and cloned human text will not lead to general AI. So there is a question that people in the field have been asking for a long time: what do we need to add to a language model for the system to become a general AI? That will be the next mission, and it brings us to the end of the ChatGPT hype. With deep learning neural networks we can see that complex algorithms work well in well-defined, stable worlds where large amounts of data are available and the underlying problem does not change. ChatGPT and similar chatbots are therefore experts in this abstract world: they can generate a series of symbols, but that is not at all the same as understanding in the human sense. Human intelligence deals with ill-defined, rapidly changing situations and unstable worlds with a high degree of uncertainty.
AI researchers, as well as some AI applications, deal with worlds characterized by uncertainty. When modelling real-world phenomena with the objective of providing a basis for software implementations of intelligent systems, we may encounter incomplete or uncertain information. Since such information hinders a more structured approach to modelling, it is natural to handle uncertainty through one of the graded logics, fuzzy logic being a common choice (a minimal sketch follows the list below). In machine learning, according to the type of feedback the system receives, there are three main types of learning: unsupervised learning, reinforcement learning, and supervised learning.
- In unsupervised learning the system learns patterns in the input even though no explicit feedback is supplied. The most common unsupervised learning task is clustering: developing concepts without labelled examples provided by a teacher. Unsupervised learning arguably comes closer to human reasoning than the next two approaches.
- In reinforcement learning the AI system learns from a series of reinforcements: rewards or punishments.
- In supervised learning the AI system observes example input–output pairs and learns a function that maps inputs to outputs; in other words, a teacher of some kind exists. This kind of learning is used in neural networks and deep learning, and it is one of the building blocks of generative AI. In reality, these distinctions are not always so crisp; the sketches that follow illustrate the basic idea of each paradigm.
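To make the fuzzy-logic idea concrete, here is a minimal sketch in Python. The triangular membership function, the temperature sets, and all thresholds are illustrative assumptions, not something from the text above; the point is only that truth becomes a degree in [0, 1] rather than a crisp yes/no.

```python
# A minimal sketch of graded truth in fuzzy logic. The triangular membership
# function and the temperature sets below are illustrative assumptions.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy sets for room temperature in degrees Celsius.
def cold(t): return triangular(t, -5.0, 5.0, 15.0)
def warm(t): return triangular(t, 10.0, 20.0, 30.0)
def hot(t):  return triangular(t, 25.0, 35.0, 45.0)

t = 18.0
# Unlike a crisp predicate, a temperature can belong to several sets at once,
# each to a degree in [0, 1].
print(f"cold: {cold(t):.2f}, warm: {warm(t):.2f}, hot: {hot(t):.2f}")
# -> cold: 0.00, warm: 0.80, hot: 0.00
```

At 18 °C the reading is partly warm and not at all cold or hot, which is exactly the kind of graded statement a crisp true/false logic cannot express.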
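The contrast between the first and third paradigms fits in a few lines. The following sketch assumes NumPy and scikit-learn; the toy data is invented for illustration.

```python
# A minimal sketch contrasting supervised and unsupervised learning,
# assuming NumPy and scikit-learn; the toy data is invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: example input-output pairs are given, and the system learns
# a function from inputs to outputs. The labels y play the role of a teacher.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 4.0, 6.2, 7.9])
reg = LinearRegression().fit(X, y)
print(reg.predict([[5.0]]))  # roughly [10.0]

# Unsupervised: only inputs are given, no labels. The system has to find
# structure on its own, here by clustering the points into two groups.
points = np.array([[0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [5.2, 4.9]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)  # the two spatial groups receive distinct labels, e.g. [0 0 1 1]
```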
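Reinforcement learning can be sketched just as briefly. Below is a toy tabular Q-learning loop on a hypothetical five-cell corridor with a reward at the right end; the states, rewards, and hyperparameters are all assumptions made for illustration.

```python
# A toy tabular Q-learning agent on a hypothetical five-cell corridor with a
# reward at the right end; states, rewards and hyperparameters are all
# illustrative assumptions.
import random

N_STATES = 5             # corridor cells 0..4; cell 4 is the rewarding goal
ACTIONS = [+1, -1]       # step right or step left
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):     # training episodes, each starting at the left end
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should step right (+1) in every non-goal state.
print([ACTIONS[max(range(2), key=lambda i: Q[s][i])] for s in range(N_STATES - 1)])
# -> [1, 1, 1, 1]
```

Note there are no labelled examples here: the agent never sees the "right answer" for any state, only the delayed reward at the goal, and the update rule propagates that signal backwards through the table.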
This promise of a world responsibly empowered by AI continues to drive our work at Google DeepMind. For a long time, we’ve wanted to build a new generation of AI models, inspired by the way people understand and interact with the world. AI that feels less like a smart piece of software and more like something useful and intuitive - an expert helper or assistant.
Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.
Gemini is also our most flexible model yet - able to efficiently run on everything from data centers to mobile devices. Its state-of-the-art capabilities will significantly enhance the way developers and enterprise customers build and scale with AI.
We’ve optimized Gemini 1.0, our first version, for three different sizes:
- Gemini Ultra - our largest and most capable model for highly complex tasks
- Gemini Pro - our best model for scaling across a wide range of tasks
- Gemini Nano - our most efficient model for on-device tasks
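For orientation, here is roughly what calling one of these models from Python can look like. This sketch is not part of the quoted announcement: it assumes the publicly available google-generativeai SDK, and the model name, API key, and prompt are placeholders.

```python
# A minimal sketch of calling a Gemini model from Python. This assumes the
# publicly available google-generativeai SDK (pip install google-generativeai);
# the model name, API key and prompt are placeholders, not from the post above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")       # placeholder credential
model = genai.GenerativeModel("gemini-pro")   # the Pro size from the list above
response = model.generate_content("Explain multimodal AI in one sentence.")
print(response.text)
```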
Link:
https://blog.google/technology/ai/google-gemini-ai/#introducing-gemini