Recently people quite far from IT asked me: what is AI? I answered something, but realized it wasn’t clear enough. So I’ll write it better here. This will be useful for technical specialists too, as understanding of features doesn’t always come immediately from knowledge of internal structure.
By AI in this context we’ll understand ChatGPT, GigaChat and other LLMs (large language models).
How it works
The idea is quite simple: let’s repeat the brain, even if in a more simplified mode. It’s clear that a program differs from a brain as an airplane differs from a bird: there’s something in common, but no one flaps wings.
There are artificial neurons. They’re arranged in layers. First, text is converted to numbers and goes to the first layer. After multiplications by some coefficients, data goes to the second and so on. Training – is exactly changing multiplication coefficients in each neuron.
AI is trained conditionally on the internet, i.e., on some not very verified data. The whole point is that LLMs try to continue the text they received as input, write the most probable continuations. On a small number of neurons you can see that there’s no magic and they write quite poorly, but on a large one it turns out quite plausible.
AI works during a query, it doesn’t “think” between queries. Usually the current conversation dialog goes as input, as AI doesn’t remember anything itself. And by the dialog (context) it writes a new message from itself.
AI, unlike animals, doesn’t learn during work: nothing new appeared in it from the previous query. Maybe this will be fixed a bit and some data context between queries will appear to know user preferences, but this still isn’t learning. AI training – is a completely separate process, usually this is called a new version.
Algorithms aren’t built into AI. And there’s an element of randomness. This is well visible on simple mathematical examples: sometimes multiplies correctly, sometimes not.
Some conclusions
- don’t pay attention to AI tone/emotions, as they don’t express anything. “Yes, you’re right” etc. – just skip, even better to relate emotionally negatively, so such phrases don’t lull vigilance.
- due to training features, AI has a certain American point of view, which is bad from cultural and political perspectives (even ours often have this, as they’re retrained, not trained from scratch usually)
- better not to use AI as a data source, but use as a processor: give a rule and give all data, and it processes. Why? They’re, of course, trained on something, but data can be inaccurate and usually outdated (at least by a year, or more). They invented MCP – this is a way for AI to get data as a function call, but you need to connect and configure this separately.
- AI hallucinates: they don’t have a database inside, they don’t exactly know what they know. So they can make up even weather when you ask them, though they may have no data on this city. They were separately trained to answer more correctly about weather, but for some other cases maybe not yet. This leads to the fact that everything made by AI must be double-checked: facts, logic, etc. Only the person using AI is responsible for the result.
- it’s extremely difficult to build unambiguous algorithms based on AI: usually for one query it would be good to get several answer options
How to use
- Most convenient in text editors like VS Code, as this way you can accumulate knowledge and rules on the necessary topic, and AI only helps work with text.
- You need to consider AI as an intern: they can help with something, but you definitely need to double-check behind them.
In programming, LLMs fit well:
- there’s multiple double-checking of generated code: both the programmer themselves, and tests, and accepting tester
- quite well trained specifically for programming
Areas of AI (not just LLM) that are well developed now
- speech generation from text and vice versa
- translation (it’s worse than professional, but better than nothing)
- song generation (and lyrics for them)
- image and video generation from other texts, images, and videos
- programming
- text work: summarization, style change, etc.
- designers of all kinds (from websites to house plan design) – in general, due to low quality of people in this sphere (separate question why it turned out this way and the same interior designers often don’t even know some norms, not to mention thinking about people’s convenience)
In other words, you can use where you don’t understand anything and would have to hire low-professional people (make a logo for a project, a song for some occasion for a narrow company, translation from some Chinese or African language, etc.). Or a professional can for noticeable acceleration instead of interns/apprentices: write tests in programming or figure out and describe the cause of an error, make many draft design options, etc.
A good association is also like with a pen pal: you write what you think, and AI answers. It’s not at all necessary to agree, but even just by arguing (in your mind) you come to certain conclusions and advance in reasoning. In other words, something like psychotherapy: it’s not particularly important what they answer. Plus, in the answer you can see a point you didn’t initially think about.
What AI can’t be used for
- learning, as then there will be no learning for a person, but on the contrary dependence and brain atrophy
- strategic documents, as AI will write an averaged opinion from the internet. Unlikely everything is so bad that it makes sense to invent such documents.
- political documents, as there’s unlikely confidence that AI’s political opinion matches yours
- don’t forget that most often AI is cloud, and this is a certain level of privacy threats and attacks on groups and individuals.
- AI has everything very bad with security: if you give them to read some sites on the internet, they can’t distinguish user commands from site data: for them this is all part of one query. Therefore, all data must be trusted
About the future (what’s already clear)
AI significantly speeds up certain things: image generation, finding errors in code, etc.
This will lead to professionals being able to perform a larger volume of work, and fewer interns will be needed. Accordingly, it will be harder to become a professional.
Worst will be for translators, artists, musicians, etc.: there will be much fewer cheap orders.
So far no revolution is visible: yes, somewhere efficiency increases, but it constantly increases.
A separate leap will be related to robots:
- self-driving cars
- loaders / sorters
- auto-cleaners
- auto-barbers
- auto-cooks
but when this will happen is unclear yet. Now everything is in very raw form: both doesn’t work well, and too expensive.