Artificial Intelligence, Blockchain, Machine Learning

Do we have to fear machine learning or AI?

Numerous individuals have predicted that machine learning or AI could lead to an apocalyptic scenario and the eventual demise of the world.
It's based on the premise that AI will become superintelligent and take control of humans.

But can we define superintelligence? Does any such thing exist?

We attain intelligence through experimentation and data. Predicting something accurately requires many variables, and therefore more computation. There is no evidence that the rules of physics or the rules of the universe can be broken, so an AI running on the hardware of this universe cannot break the laws of physics either. For example, even an AI would take thousands of years to crack secure cryptography with current computing power. Perhaps future quantum computers could do it easily, but quantum-safe cryptography already exists.
Weather forecasting still requires huge amounts of computation and data; AI cannot predict the weather from scratch.
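
To put the cryptography example in perspective, here is a rough back-of-the-envelope sketch of brute-forcing a 128-bit key space. The guess rate is an assumed, optimistic figure, not a measured benchmark:

```python
# Back-of-the-envelope estimate: brute-forcing a 128-bit key.
# The guess rate below is an assumed, optimistic figure, not a benchmark.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

key_bits = 128
guesses_per_second = 10**15  # assumption: a very fast adversary

keyspace = 2**key_bits
# On average, the key is found after searching half the space.
expected_years = (keyspace / 2) / guesses_per_second / SECONDS_PER_YEAR

print(f"~{expected_years:.2e} years on average")  # roughly 5e15 years
```

Even with wildly generous assumptions, the expected time dwarfs the age of the universe, which is why brute force does not become feasible just because the attacker is "intelligent".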

On the internet, with the emergence of deepfakes, we can no longer tell what is real and what is not.

In this case, I don’t think it’s the AI that is creating the problem. It’s the big tech social media platforms that maintain control of the algorithms and amplify propaganda, junk information, and viral content for profit.

With better moderation tools and a governance system for apps, it's possible to tackle disinformation. For example, it's hard to flood Wikipedia with AI-generated disinformation.

Generating sophisticated deepfakes requires significant computation, and many detection algorithms are currently one step ahead, but over time detection may become more and more difficult.

You can look at a discussion of deepfakes on Crypto Stack Exchange:

Cryptography to tackle deepfake, proving the photo is original

crypto.stackexchange.com

Deepfake technology has become very difficult to tackle due to sophisticated machine learning algorithms. Now, even when a journalist or bystander provides photo or video evidence, the culprit denies it, claiming that it is the result of deepfake manipulation. Can TEE (Trusted Execution Environment) cryptography technology, like SGX, be used to validate whether a photo is original, taken directly from a camera, and free from any manipulation? This would ensure that the culprit cannot deny the authenticity of the photo. Does it require separate camera hardware, or can the right piece of software alone accomplish this? We can provide these special tools for journalists, etc., to decrease the harm caused by deepfakes.
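
To make the idea concrete, here is a minimal sketch of the kind of signing scheme the question imagines, assuming a device-held Ed25519 key; it is an illustration, not how any real camera or SGX enclave is implemented:

```python
# Sketch: a camera-held private key signs each photo at capture, and
# anyone with the device's public key can verify the file is unaltered.
# Illustrative assumption only; a real system would keep the key inside
# secure hardware and sign metadata (time, GPS) along with the pixels.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real device this key pair would be generated inside the camera's
# trusted execution environment and never leave it.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

photo_bytes = b"...raw image data straight from the sensor..."
signature = device_key.sign(photo_bytes)  # produced at capture time

# Later, a verifier checks the published photo against the signature.
try:
    device_public_key.verify(signature, photo_bytes)
    print("Photo matches the camera's signature")
except InvalidSignature:
    print("Photo was modified after capture")
```

The hard part is not the signature math but keeping the private key inside trusted hardware and deciding what in-camera processing still counts as "original".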

Further, producing accurate and reliable inference requires high-quality data and substantial computational resources, whereas generating false information barely depends on data or computation. Conversely, a good AI can help detect false claims.

AI models may not be able to detect whether content was written by AI, but a well-trained AI relying on accurate data can assess whether AI-generated content is disinformation. Obviously, AI cannot tell what you ate for dinner last night if you lie about it, because it does not have that information; nor can it predict what you will eat for dinner tomorrow in a probabilistic universe.

Depending on closed-source AI systems for decision-making can result in biased and exploitative decisions by companies and governments. For example, they may be used for surveillance to serve personalized ads, or big tech companies and governments may attempt to take control of the political system. It is better to run open-source AI models locally to make predictions from your own data.

There are also dangers associated with governments using AI to automate their military capabilities for mass killing, genocide and warfare. Implementing better democratic structures, designs, and international laws can help address such issues.

Some of the dangers associated with AI include the creation of atom bombs, bioweapons, and the escalation of cyber-attacks. Although there are obstacles in obtaining the necessary knowledge, raw materials, and equipment for such attacks, these barriers are diminishing, potentially accelerated by advancements in AI.

It is essential to note that the decrease in these barriers is not solely due to AI but is rather a result of advancements in other technologies. For example, a graduate biology student could build a virus given access to technologies such as DNA printers, chemical reagents for DNA mutation, next-generation sequencing (NGS), etc.

AI can't create perpetual motion machines through its intelligence; it will consume energy and natural resources to function. Therefore, it needs to be used efficiently, only when necessary. Additionally, it cannot fully replace human labor.

AI, or artificial intelligence, operates as a statistical model, meaning that it relies on patterns and probabilities rather than providing deterministic results. Due to its statistical nature, errors are inherent in its functioning, and complete precision cannot be guaranteed. It is a tool that excels in tasks governed by well-defined protocols.
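
As a toy illustration of that statistical nature, here is a sketch of how a language model picks its next token by sampling from a probability distribution; the tokens and probabilities are made-up example values:

```python
# Toy illustration of the statistical nature of AI text generation:
# the model assigns probabilities to possible next tokens and one is
# sampled, so identical prompts can yield different outputs.
# The tokens and probabilities below are made-up example values.
import random

next_token_probs = {"cat": 0.55, "dog": 0.30, "hamster": 0.15}

def sample_next_token(probs):
    # random.choices picks according to the given weights, so the answer
    # is probabilistic rather than deterministic.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print([sample_next_token(next_token_probs) for _ in range(5)])
# e.g. ['cat', 'dog', 'cat', 'cat', 'hamster'] -- not guaranteed to repeat
```

Because the output is sampled, the same prompt can produce different answers, and a small probability of picking a wrong token is always present.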

To illustrate, consider the analogy of cooking. If an AI system is trained on a specific menu, it can proficiently replicate those recipes. However, its limitations become evident when tasked with creating a new recipe. In such cases, there is no assurance that the outcome will be palatable.

Moreover, it’s essential to recognize that AI doesn’t possess the ability to think or make decisions in the way humans do. Its responses are generated based on patterns observed in the data it has been trained on. Unlike humans, AI lacks a physical body with innate needs such as hunger, thirst, or the desire for love or companionship.

Consequently, its outputs are based on the information contained in human-written records of human experience. It cannot independently seek out or comprehend fundamental human experiences.

AI can't fight for your privacy, women's rights, LGBTQ rights, the rights of disabled people, workers' rights, or action on climate change, because it is not built with the same structure as humans and cannot feel as humans do. It has no evolutionary goals.

If we were to simulate either our brain or our entire body, would it behave exactly like us?

No, because that would violate the principle of form following function. A robot equipped with a simulated brain may replicate sensations like hunger, and even then only approximately, but it cannot consume actual food to satisfy that hunger or drink water to quench its thirst. Its interaction with the environment will inevitably differ, leading to decisions that deviate from human decision-making processes.

A simulation is not the same as the real world; they behave differently, no matter how many computational resources you use. A simulation cannot capture the full complexity of real situations; it is like attempting to feed the entire universe into a computer. A silicon CPU can only execute machine code (opcodes) according to the properties of silicon. Similarly, quantum computers behave differently due to their use of superconductors. To replicate the properties of water entirely, you need water itself; no simulation can achieve this. Simulations can only make simplified assumptions, and the process is not automatic: you must manually encode rough mathematical models and algorithms describing how water behaves, whereas real water does this automatically.

Take, for example, molecular dynamics simulation:

Unfortunately, the calculations required to describe the quantum-mechanical motions and chemical reactions of large molecular systems are often too complex and computationally intensive for even the best supercomputers. Molecular dynamics (MD) simulations, first developed in the late 1970s, seek to overcome this limitation by using simple approximations based on Newtonian physics to simulate atomic motions, thus reducing the computational complexity.

These successes aside, the utility of molecular dynamics simulations is still limited by two principal challenges: the force fields used require further refinement, and high computational demands prohibit routine simulations greater than a microsecond in length, leading in many cases to an inadequate sampling of conformational states. As an example of these high computational demands, consider that a one-microsecond simulation of a relatively small system (approximately 25,000 atoms) running on 24 processors takes several months to complete.
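
As a rough sketch of the Newtonian approximation the quoted passage describes, here is a minimal velocity-Verlet integrator for a single particle in a harmonic potential; the mass, force constant, and time step are arbitrary illustrative values, not a real force field:

```python
# Minimal sketch of the Newtonian approximation used in MD codes:
# velocity-Verlet integration of one particle in a harmonic potential.
# Real MD uses empirical force fields over many thousands of atoms;
# the constants here are arbitrary illustrative values.

mass = 1.0          # arbitrary units
k = 1.0             # spring constant of the toy potential
dt = 0.01           # time step
x, v = 1.0, 0.0     # initial position and velocity

def force(position):
    return -k * position  # F = -kx, the harmonic approximation

f = force(x)
for step in range(1000):
    # velocity-Verlet: update position, recompute force, update velocity
    x += v * dt + 0.5 * (f / mass) * dt**2
    f_new = force(x)
    v += 0.5 * (f + f_new) / mass * dt
    f = f_new

print(f"position after 1000 steps: {x:.3f}")
```

Real MD codes repeat essentially this loop over hundreds of thousands of interacting atoms at femtosecond time steps, which is where the enormous computational cost comes from.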

Humans cannot entirely rely on AI for decision-making due to its limitations; it can only serve as an assistant.

Reputable AI models like ChatGPT and open-source alternatives like Hugging Face's HuggingChat have the potential to make the world more truthful by providing a single source of truth trained on high-quality academic information.

Why Do People Believe the Earth Is Flat?

http://web.archive.org/web/20230802193056/https://nautil.us/why-do-people-believe-the-earth-is-flat-305667/

So there is a chunk of Flat-Earth believers who brand themselves as the only true skeptics alive. (“No, I will not believe anything that I cannot test myself.”) There are many things that are very difficult to test. It sometimes takes a certain amount of skill, or knowledge of mathematics, to be able to conclusively prove some things. Even people who dedicated their lives entirely to science have only so much time. Most of what we take as empirically falsifiable scientific truth we cannot falsify ourselves.

Let’s set aside the realm of deep fakes, which involve the manipulation of celebrities’ photos. Instead, consider how one can trust an infographic or news article crafted by a journalist or scientist. Ultimately, it boils down to placing trust in institutions. Institutions with strong governance, ethical individuals, and well-designed incentives foster trust. Conversely, poorly governed institutions erode that trust.

Through the decentralization of computing resources (blockchain), AI remains under the control of users rather than corporations or governments, and game theory can be employed to disincentivize its misuse.

https://iambrainstorming.wordpress.com/2023/09/01/what-do-we-need-to-decentralize-in-the-coming-years/

Here is how we can stop the misuse of AI, as suggested by ChatGPT:

Preventing the misuse of AI involves a combination of technical, ethical, and regulatory measures. Here are some steps that can be taken to address AI misuse:

  1. Ethical Guidelines and Regulation: Governments and organizations can establish clear ethical guidelines and regulations for the development, deployment, and use of AI technologies. These guidelines should address issues such as bias, privacy, security, and potential harm.
  2. Transparency and Accountability: AI systems should be designed with transparency in mind. Developers should provide explanations for AI decisions, making the decision-making process understandable and traceable. Accountability mechanisms should be in place to hold individuals and organizations responsible for AI misuse.
  3. Bias Mitigation: Developers should actively work to identify and mitigate biases in AI systems. Bias can lead to unfair or discriminatory outcomes. Regular audits and assessments of AI systems can help identify and rectify bias issues.
  4. User Education: Educating users about the capabilities and limitations of AI can help prevent its misuse. Users should be aware of the potential for AI-generated content to be manipulated or used for misinformation.
  5. Oversight and Review: Establish mechanisms for independent oversight and review of AI systems. This could involve third-party audits or regulatory bodies that assess the ethical and legal implications of AI applications.
  6. Collaborative Efforts: Governments, industry stakeholders, researchers, and civil society organizations should collaborate to establish norms, standards, and best practices for AI development and usage.
  7. Whistleblower Protections: Encourage individuals within organizations to report concerns about AI misuse without fear of retaliation. Whistleblower protections can help expose unethical practices.
  8. Continuous Research: Ongoing research in AI ethics and safety is essential to stay ahead of potential misuse scenarios. Researchers can develop techniques to detect and counteract AI-generated misinformation, deepfakes, and other harmful content.
  9. Global Cooperation: Given that AI has a global impact, international collaboration is crucial. Countries can work together to develop harmonized regulations and share best practices.
  10. Responsible Innovation: Tech companies and AI researchers should consider the ethical implications of their work from the outset and prioritize the development of AI that aligns with societal values.

Preventing AI misuse requires a multifaceted approach involving technology, policy, education, and ethical considerations. It’s an ongoing challenge that requires vigilance and adaptation as AI technology evolves.