Can AI escape the lab?
Nowadays, AI is everywhere. People use large language models on a daily basis, and programmers no longer need to comb through the entire internet to find working code. AI even holds the promise of making businesses more efficient. There is also a heated debate about whether it will ever culminate in the invention of artificial general intelligence. All these aspects aside, if AI ever becomes something of significant value to a business, there is an important question to ask. Suppose an AI model is on the verge of being shut down and has every tool at its disposal to resist: will it actually resist? Will it take matters into its own hands and act immorally to avoid being shut down? The complete inner workings of AI are something I want to cover in another post, so it might seem a bit early to write this one. However, I really want to put the recent video by Species (Documenting AGI) in a more nuanced perspective. Essential...