Friday, January 02, 2026

The Age of AI: Redefining What It Means to Be Human

When viewed through the long lens of history, the current explosion of AI technology is not an isolated anomaly.

  • When the steam engine arrived, carriage drivers panicked.

  • When electricity became common, the lamplighters vanished.

  • When computers entered the office, clerks were replaced.

History has proven time and again: Technology does not phase out "people"; it phases out fixed roles in the division of labor. The Age of AI is no different. However, this time, the change is faster, deeper, and touches more directly upon the very essence of "being human."


I. AI is Not an Option—It is the Background

AI is no longer a question of "to learn or not to learn." It is about realizing you are already immersed in it.

Specifically, the generations born between 1970 and the early 2000s find themselves in a precarious position: they spent the first half of their lives painstakingly accumulating experience, only to face a reality where that experience is rapidly devaluing. This isn't a matter of personal effort; it is a generational shift in technology.

AI does not ask for permission. It won't wait for society to be ready, nor will it provide a buffer zone for individuals. Like electricity or the internet, once it becomes infrastructure, it accelerates until it becomes the environment itself.

The real danger is not that AI is too strong, but that humans are still using "old maps" to navigate a "new continent."


II. AI Does Not "Replace Humans"—It Amplifies Choices

A common misconception is that AI will replace people. A more accurate statement is: AI replaces roles of "execution without judgment."

  • The automobile didn't end human travel; it ended the carriage as the only option.

  • The calculator didn't make math obsolete; it freed humans from wasting energy on repetitive computation.

AI follows the same logic. It exponentially enhances our ability to calculate, deduce, generate, and retrieve—but it does not decide where to go. Direction remains a human prerogative. AI can calculate the probability, cost, and efficiency of every path, but it cannot tell you which path is worth walking for a lifetime. Our true role is not to "calculate faster," but to judge what is worth calculating.


III. AI Leads in Expertise, But Stumbles in "Understanding"

We must face a hard truth: in almost every field of professional knowledge, AI has already surpassed the average human.

Whether it is medicine, law, coding, finance, or linguistics—in terms of breadth of knowledge, update speed, and consistency of output—AI is a tireless expert that never forgets. To deny this is mere self-consolation.

However, its weaknesses are equally glaring:

  • It understands statistical correlation, not meaning.

  • It generates formally correct results, not value judgments.

  • It excels at "looking like" something, without truly knowing "what it is."

AI can mimic style but cannot bear responsibility. It can synthesize creativity but does not understand the sacrifice involved. It can provide answers but does not suffer the consequences. AI has no worldview and no life story. Understanding the world, owning one's choices, and bearing the results—these are the core of what it means to be human.


IV. The Growing Pains are Real: A Temporary Loss of Productivity Roles

We must honestly face a harsh reality: as AI permeates society, many jobs will lose their economic significance in a very short time.

This isn't because people aren't working hard enough; it’s because the speed of technological transition has, for the first time, outpaced the speed of individual adaptation. We will see a "fault line" in our social structure: some will upgrade quickly, while others are ejected from the old system entirely.

In the long run, humanity may experience a contraction of traditional roles. As productivity skyrockets, the demand for "raw labor" drops. Society will be forced to redefine work, value, and distribution. While future milestones—space colonization, interstellar resources, and galactic civilization—may lead to a new era of expansion, we must first survive this period of intense self-restructuring.


V. So, How Should We Respond?

The answer is simple in concept, though difficult in execution:

  1. Do what AI cannot. AI struggles with true understanding, complex ethical judgment, cross-disciplinary synthesis of meaning, and the building of trust. It cannot truly empathize. Focus on decisions that require "skin in the game" and problems that are ambiguous and have no standard answer.

  2. Treat AI as an amplifier, not an opponent. The most competitive people of the future won't be those who avoid AI, but those who harness it to perform higher-order thinking. The person who knows how to ask is more important than the one who knows how to answer. Defining the problem is now more valuable than solving it.

  3. Redefine "Learning." Learning is no longer about memorizing information; it is about building frameworks for judgment, abstraction, and the ability to transfer skills between fields. It’s not about "what I know," but "how quickly I can understand, reorganize, and create."


An Upgrade, Not an Ending

Technology will neither save nor destroy humanity. It will merely amplify who we already are. If we grow accustomed to dependence, avoidance, and intellectual lethargy, AI will make that state absolute. If we insist on understanding, judging, creating, and taking responsibility, AI will become an unprecedented catalyst for our potential.

The real question has never been: "What will AI turn us into?" But rather: "In the face of AI, are we still willing to do the hard work of being human?"

Wednesday, July 30, 2025

AI Safety: The Final Threshold

As artificial intelligence continues to make breakthroughs in multimodal perception, logical reasoning, natural language processing, and intelligent control, human society is rapidly entering an era dominated by pervasive intelligent systems. From cognition-driven LLMs (large language models) to embodied intelligence in autonomous vehicles and robotic agents, AI is evolving from a tool for information processing to an autonomous actor with real-world agency. In this transformation, AI safety has moved from a marginal academic topic to a structural imperative for the continuity of civilization.


Technological Singularity Approaches: Systemic Risks at the Safety Threshold

Today’s AI systems are no longer confined to closed tasks or static data—they exhibit:

  • Adaptivity: Online learning and real-time policy updating in dynamic environments;

  • High-Degree Autonomy: The ability to make high-stakes, high-impact decisions with minimal or no human supervision;

  • Cross-Modal Sensorimotor Integration: Fusing visual, auditory, textual, and sensor inputs to drive both mechanical actuators and digital infrastructure.

Under these conditions, AI failures no longer mean simple system bugs—they imply potential cascading disasters. Examples:

  • A slight misalignment in model objectives may cause fleets of autonomous vehicles to behave erratically, paralyzing urban transport;

  • Misguided optimization in grid control systems could destabilize frequency balance, leading to blackouts;

  • Medical LLMs might issue misleading diagnostics, resulting in mass misdiagnosis or treatment errors.

Such risks possess three dangerous properties:

  1. Opacity: Root causes are often buried in training data distributions or loss function formulations, evading detection by standard tests;

  2. Amplification: Public APIs and model integration accelerate the propagation of faulty outputs;

  3. Irreversibility: Once AI interfaces with critical physical infrastructure, even minor errors can lead to irreversible outcomes with cascading societal effects.

This marks a critical inflection point where technological uncertainty rapidly amplifies systemic fragility.


The Governance Divide: A False Binary of Open vs. Closed Models

A central point of contention in AI safety is whether models should be open-source and transparent, or closed-source and tightly controlled. This debate reflects not only technical trade-offs, but also fundamentally divergent governance philosophies.

Open-Source Models: The Illusion of Collective Oversight

Proponents of open-source AI argue that transparency enables broader community scrutiny, drawing parallels to the success of open systems like Linux or TLS protocols.

However, foundational AI models differ profoundly:

  • Interpretability Limits: Transformer-based architectures exhibit nonlinear, high-dimensional reasoning paths that even experts cannot reliably trace;

  • Unbounded Input Space: Open-sourcing models doesn’t ensure exhaustive adversarial testing or safety guarantees;

  • Externalities and Incentives: Even if some community members identify safety issues, there's no institutional mechanism to mandate fixes or coordinate responses.

Historical examples such as the multi-year undetected Heartbleed vulnerability in OpenSSL underscore that “open” is not synonymous with “secure.” AI models are orders of magnitude more complex and behaviorally opaque than traditional software systems.

Closed Models: Isolated Systems under Commercial Incentives

Advocates for closed models argue that proprietary systems can compete for robustness, creating redundancy: if one model fails, others can compensate. This vision relies on two fragile assumptions:

  1. Error Independence: In practice, today’s models overwhelmingly rely on similar data, architectures (e.g., transformers), and optimization paradigms (e.g., RLHF, DPO). Systemic biases are highly correlated.

  2. Rational Long-Term Safety Investment: Competitive pressure in AI races incentivizes speed and performance over long-horizon safety engineering. Firms routinely deprioritize safeguards in favor of time-to-market metrics.

Furthermore, closed-source systems suffer from:

  • Lack of External Accountability: Regulatory agencies and the public lack visibility into model behavior;

  • Black Box Effect: Profit incentives encourage risk concealment, as seen in disasters like the Boeing 737 Max crisis.


Core Principle I: Observability and Controllability

Regardless of model openness, AI safety must be grounded in two foundational capabilities:

Observability

Can we audit and understand what a model is doing internally?

  • Are intermediate activations traceable?

  • Are outputs explainable in terms of input features or latent reasoning paths?

  • Can we simulate behavior across edge-case conditions?

  • Is behavioral logging and traceability built in?

Without observability, we cannot detect early-stage drift or build meaningful safety monitors. The system becomes untestable at runtime.
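As a concrete illustration of the logging and traceability point above, here is a minimal Python sketch, assuming a generic `model_fn` callable and a local JSON-lines log file (not any particular vendor's API): every inference is recorded with an ID, timestamp, input, and output, so behavior can be audited after the fact.

```python
import json
import time
import uuid
from typing import Callable

class ObservableModel:
    """Wraps a model callable so every inference leaves an auditable trace.

    `model_fn` and the log path are illustrative assumptions, not a real API.
    """

    def __init__(self, model_fn: Callable[[str], str], log_path: str = "audit.log"):
        self.model_fn = model_fn
        self.log_path = log_path

    def __call__(self, prompt: str) -> str:
        start = time.time()
        output = self.model_fn(prompt)
        record = {
            "id": str(uuid.uuid4()),          # unique ID for later tracing
            "timestamp": start,
            "latency_s": time.time() - start,
            "input": prompt,
            "output": output,
        }
        # Append one JSON record per inference (a JSON-lines audit trail).
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
```

In a real deployment the records would go to tamper-evident storage, but the pattern is the same: observability is wrapped around the model from the start, not bolted on afterward.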

Controllability

Can humans intervene at critical moments to halt or override model actions?

  • Does a “kill switch” or emergency interrupt mechanism exist?

  • Can human instructions override the model’s policy in real time?

  • Are behavior thresholds enforced?

  • Do sandboxed and multi-layered control interfaces limit autonomous escalation?

These control channels are not optional—they constitute the final fallback mechanisms for averting catastrophic behavior.
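A minimal sketch of such an interrupt channel, in Python with illustrative names (a toy, not a production design): the agent checks a shared stop flag before every action, so a human or an automated monitor in another thread can halt it at any moment.

```python
import threading

class ControllableAgent:
    """Toy agent loop with an emergency interrupt ("kill switch").

    Names and structure are illustrative; the point is that the human
    override is consulted before every single action, not once at startup.
    """

    def __init__(self):
        self._stop = threading.Event()  # shared flag, safe across threads
        self.actions_taken = []

    def kill_switch(self):
        """Human or monitor override: halts the agent before its next action."""
        self._stop.set()

    def run(self, planned_actions):
        for action in planned_actions:
            if self._stop.is_set():  # check the interrupt at every step
                break
            self.actions_taken.append(action)  # stand-in for real execution
```

The essential design choice is that the stop check sits inside the action loop; a kill switch consulted only at launch provides no real-time controllability.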


Core Principle II: Severing AI’s Direct Agency over the Physical World

Before comprehensive safety architectures mature, the most effective short-term defense is strict separation between AI models and the physical systems they could control. Tactics include:

  • Action Confirmation Loops: No high-risk action should execute without explicit human approval;

  • Hardware-Level Isolation: All model-issued instructions must pass through trusted hardware authentication, such as TPM or FPGA-controlled gates;

  • Behavior Sandboxing: New policies or learned behaviors must be tested in secure emulated environments before deployment;

  • Dynamic Privilege Management: AI access to physical systems should adjust dynamically based on model state, system load, and contextual risk.

These constraints mirror the “separation of powers” design in critical systems like aviation control and serve as the first line of defense against autonomous execution hazards.
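At its core, an action confirmation loop is a gate that refuses to execute a high-risk command without explicit approval. Here is a minimal Python sketch, where the action names and the `approve` callback are hypothetical stand-ins for a real actuator interface and human-in-the-loop channel:

```python
from typing import Callable

# Hypothetical high-risk action names; a real system would assess risk
# from context and system state, not a static list.
HIGH_RISK_ACTIONS = {"open_valve", "disable_alarm"}

def execute(action: str, approve: Callable[[str], bool]) -> str:
    """Run `action` only if it is low-risk or a human explicitly approves it."""
    if action in HIGH_RISK_ACTIONS and not approve(action):
        return "rejected"   # default-deny: no approval, no execution
    return "executed"       # stand-in for dispatching to the actuator
```

The gate is default-deny: absent an affirmative approval, the high-risk path simply does not run, which is the property the confirmation-loop tactic depends on.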


The Final Protocol: Redundancy as a Prerequisite for Civilizational Survival

As AI systems eventually exceed the cognitive boundaries of human oversight—becoming general, adaptive, and self-improving—the question of human sovereignty will pivot on whether we’ve built sufficient institutional and architectural buffers today.

Technology’s limits are ultimately defined by policy and design, not capability. Safety must not depend on model goodwill—it must be enforced through irrevocable mechanisms: interruptibility, auditability, bounded agency, and verifiable behavior space.

AI safety is not an application-layer patch; it is a foundational layer in humanity’s civilizational protocol stack.

Without this “final key,” we will soon hand operational control to agents we cannot interpret, predict, or restrain. This is not hypothetical. It is a question of timing—driven by the exponential trajectory of model capability.

Wednesday, June 04, 2025

Some recent thoughts on AI

1. Software form will change greatly

The role of software, in the final analysis, is to serve as a bridge between humans and data. Software lets people extract value from data more efficiently while also operating on it.

Traditional software leans toward the data side, so people have had to learn and adapt to the machine's language.

In the future, AI will become a new bridge. Software begins to learn how to understand people. The direction of the bridge has reversed.

In the future, the very concept of software may disappear. AI will become a universal new "interface".

2. AI's model and data are essentially equivalent

Whether data or the model is more important has long been debated in academia.

In fact, the two are essentially equivalent.

Data can be used to train models, and models record the knowledge extracted from data. A model is therefore data in a transferred form: another dimension in which information is expressed.

AI is not just changing information systems, it is itself becoming the ultimate form of information systems.

Thursday, July 18, 2024

The Trilogy of Human Civilization

 

Humans are fortunate to have Earth, a habitable island in the vast sea of stars; humans are also unfortunate because, within the known range, we cannot find another place to go.

Many people may not understand a lot of things right now and feel confused about the future. However, once you look further, to the entire history of civilization, everything has a pattern to follow, and the future will become clearer.

The Age of Survival

The goal of civilization was to use limited resources to extend the race.

For a long time after the birth of civilization (millions of years?), humans were not much different from the animals of nature: the primary need was to fill their stomachs, struggling at the line of subsistence. Whether through ancient hunting and fishing or later farming and herding, it was all about survival.

During this period, our utilization rate of resources was extremely low, and labor was relatively scarce. To plunder labor and resources, wars were frequent.

The Age of Surplus

The heritage of human civilization and the progress of technology ushered us into the Age of Surplus. With the emergence of tools such as steam engines and electrical appliances, a modern commercial society gradually established itself. For the first time, humans were able to fill their stomachs and even produce goods exceeding demand.

The increase in resource utilization led to explosive population growth and unprecedented economic prosperity. Humans truly transcended the animal kingdom, becoming a distinct civilization.

However, the Earth's living space is limited, and population growth gradually stabilized. Technology continued to advance: where once everyone had to labor, now only 1% of people need to work to meet the needs of the entire world.

So, what should the remaining 99% of people do? On one hand, humans continually create new demands, including virtual and service-oriented ones. On the other hand, periodic economic crises pause production activities to match the relatively lagging development of society. Meanwhile, social welfare mechanisms provide basic resources for the rest.

The Age of Freedom

Then, artificial intelligence emerged, and productivity exploded. Modern production almost no longer required human involvement. Humans were freed from production for the first time.

So, besides relying on welfare mechanisms to maintain life, what else can humans do? Perhaps only entertainment and a small amount of artistic creation are left.

Conclusion

From the Age of Survival to the Age of Surplus, and then to the Age of Freedom, human civilization has undergone tremendous changes.

In a future where human labor is almost unnecessary, how can we find new meaning and value? The future human society may no longer center on material production but instead focus on spiritual pursuits and self-improvement. We need to redefine the meaning of "work" and "life," exploring new social structures and value systems.

Thursday, May 16, 2024

Top Secrets of Pickleball

 

General Principles

Let the paddle dance in the hand; let the body float on the court. Eyes observe all directions, sending the ball to any possible placement.

Advance and retreat according to the rules; attack and defend with special skills. Spread the opponent left and right; coordinate shots in depth and distance.

Rhythm varies between fast and slow; the hit turns soft and hard. In battle, use less movement to make the opponent move more, and control more with smaller gestures.

Being in an undefeated state, the trap awaits the opponent to jump in. With a heart as calm as still water, you will be carefree on the court.

Chapter One: The Source of Using Power

The art of pickleball lies first in using power. Power begins in the feet, rising from the bottom up, through the legs, hips, waist, back, shoulders, arms, palms, and finally to the fingers. The transmission of power is like the flow of a river; when the path is clear, the power flows smoothly, and when the joints are aligned, the power penetrates. One who wishes to generate power efficiently should be like a fully drawn bow, tense on the inside but relaxed on the outside. When releasing power, it should be like a tidal wave crashing against rocks, delivering an instantaneous and overwhelming force.

Chapter Two: The Secret of Hitting

The application of pickleball skill culminates in the hit itself. There are three hitting techniques: the first is "direct hit," the second is "spin," and the third is "flicking." The direct hit breaks the opponent with straight force; spin confuses the opponent with variation; flicking controls the opponent with high and low shots of unpredictable placement. Each technique has its strengths, complementing the others. Players should use them flexibly according to the opponent's situation.

Chapter Three: The Precision of Hand Techniques

Hand techniques are the essence of pickleball. The three main techniques are: swinging, volleying, and blocking. Each technique has its unique uses, complementing each other. Swinging is the foundation, volleying is sharp, and blocking is stable. The techniques are diverse and ever-changing. Learners should carefully experience, practice diligently, and achieve the unity of mind and hand.

Chapter Four: The Variations of Footwork

Footwork is the foundation of defense and attack, moving like a wandering dragon and turning like a returning swallow. Footwork can be short or long, fast or slow. Practitioners should focus on stability and avoid impatience. Short steps are suitable for quick battles, long steps are beneficial for maneuvering. Step by step, one can respond to endless situations. The feet move with the body, allowing for smooth and effortless movement.

Chapter Five: The Subtlety of Mental Techniques

A pickleball battle is not merely about temporary victory or defeat but a contest of mind and strategy. Mental techniques have three realms: the first is "defense like a mountain," with a steady and unwavering mind; the second is "attack like fire," with an aggressive and fiery offense; the third is "change like the wind," with unpredictable variations. Practitioners of mental techniques should cultivate introspection, respond calmly, observe the opponent's stance, and adapt accordingly.

Chapter Six: The Skill of Body Techniques

Body techniques are the core application. They should be flexible and varied, with free and easy actions, using less movement to let the opponent move. There are three essentials: stance, movement, and turning. The stance should be stable, ready to move at any moment; movement should be agile, with strength contained within, flowing like clouds and water, unimpeded; turning should be balanced, with one foot as the pivot and the other moving, changing direction without increase or decrease.

Chapter Seven: The Way of Intelligent Battle

Intelligent battle lies in knowing oneself and the opponent. Observe the opponent's weaknesses and formulate strategies; understand the situation on the court and choose techniques flexibly. Balancing skill and wisdom is essential for victory. Beyond intelligent battle, it is necessary to use principles to govern techniques, control movement with stillness, and maintain a calm heart, to see the vast freedom of the world.