
PROLOGUE

A quiet reverberation of new frontiers courses through the scientific and legal realms, binding the boldness of quantum mechanics with the ancestral gaze of theology. On the horizon, we glimpse not only machines that manipulate neutrinos for instantaneous communication but also the vision of possible multiverses where the old “laws” of nature break open into the vast symphony of infinite creation.

This document offers a key to innumerable gateways: it discusses machines that “entangle” particles beyond time itself, formulas that claim an unprecedented form of legal protection, and, above all, the imminence of exploring parallel universes where physics, mathematics, and faith converge without reservations.

For centuries, philosophers and scientists have brushed up against the mystery of infinity; now the promise extends further. With the support of quantum computing and the visionary step of patenting the so-called “impossible,” we dare to penetrate the very genesis of the cosmos. From the monolithic sovereignty of relativity—now beginning to yield—to a legal system ready to legitimize abstract innovation, this text heralds an extraordinary alliance of the divine and the technological.

Perhaps, in venturing into these multiverses, we will find that each particle is a portal to successive cosmic landscapes. Perhaps we will see the word “impossible” fade away in the fabric of space-time. In any case, we now step into a realm in which humanity, rather than fearing the unknown, embraces it with the momentum that has always propelled us toward the next great wonder. PLPB

Daniel 12:4

ܘܐܢܬ ܕܐܢܝܐܝܠ ܣܘܪ ܠܡܠܐ ܗܕܐ ܘܩܛܘܡ ܐܦ ܣܪܗܪܐ ܕܟܬܒܐ، ܥܕ ܥܕܢܐ ܕܣܦܐ:
ܣܓܝܐܐ ܢܗܒܝܠܘܢ ܘܬܪܒܐ ܝܕܥܐ
(“But you, Daniel, shut up the words and seal the book until the time of the end; many shall run to and fro, and knowledge shall increase.”)

Matthew 19:26

ܐܡܪ ܝܫܘܥ ܠܗܘܢ:
ܥܡ ܒܢܝܢܫܐ ܗܳܕܶܐ ܠܳܐ ܫܳܟܺܝܚܳܐ،
ܐܠܳܐ ܥܡ ܐܠܳܗܳܐ ܟܽܠܗܝܢ ܡܨܐܟܝܚܳܐ ܗܶܢ
(“Jesus said to them: With men this is impossible, but with God all things are possible.”)

INTRODUCTION, GLOSSARY, ORIGINAL PUBLICATION DATED JUNE 26, 2015, THE PROBLEM, RESEARCH OBJECTIVES, GENERAL OBJECTIVE, SPECIFIC OBJECTIVES AND SOLUTIONS, RESEARCH METHODOLOGY FOR THIS FORMULA’S INVENTION, CASE LAW RELATED TO THE PROTECTION OR NON-PROTECTION OF ABSTRACT FORMULAS, CHALLENGES FOR CHANGE, DIALECTICS, THE TIME MACHINE, APPENDIX AND BIBLIOGRAPHY.


TABLE OF CONTENTS

I. INTRODUCTION
   1. General context of the research
   2. Theological and legal motivations
   3. Historical and scientific background

II. GLOSSARY
   – Algorithm
   – Modified Quantum Algorithm
   – Blockchain
   – Temporal Loop
   – The Prince
   – White Dwarf
   – Toroidal Energy
   – Quantum Entanglement
   – Neutrino Quantum Entanglement
   – Reverse Engineering
   – Artificial Intelligence (AI)
   – Fractal
   – Dark Matter
   – Neutrino
   – Patent
   – Solar Supernova
   – 1895 Theory of Infinite Sets (Cantor)
   – Toroid

III. ORIGINAL PUBLICATION (JUNE 26, 2015)
   1. Summary and link to the publication
   2. The Aleph: historical, mathematical, and literary significance
   3. Brief notes on Cantor and Borges

IV. THE PROBLEM
   1. Description of the main problem
   2. Legal context: the impossibility of patenting “pure formulas”
   3. The need to reinterpret patent law in light of technological evolution

V. RESEARCH OBJECTIVES
   1. Reasons for the theological–legal study
   2. Scientific and philosophical justification

VI. GENERAL OBJECTIVE
   1. Progressive interpretation of patent regulations
   2. Proposal for protecting abstract formulas when there is a reasonable utility expectation

VII. SPECIFIC OBJECTIVES AND SOLUTIONS
   1. Including the concept of “invention” for mathematical formulas
   2. Analysis of industrial impact and hypothetical utility
   3. Legal and jurisprudential recommendations

VIII. RESEARCH METHODOLOGY
   1. Qualitative analysis of theological and legislative sources
   2. Integration of ancient languages (Aramaic, Hebrew)
   3. Use of biographies of relevant mathematicians and physicists
   4. Oneiric revelations and unconscious cognitive processes

IX. RELEVANT CASE LAW
   1. Alice Corp. v. CLS Bank
   2. Bilski v. Kappos
   3. Comments on Mayo v. Prometheus
   4. Comparative analysis with the European Union (EPO)

X. CHALLENGES FOR CHANGE
   1. Arguments for overcoming the prohibition on patenting abstract formulas
   2. Jurisprudential interpretation strategies “contra legem”
   3. The evolutionary perspective of law
   4. Examples of how the formula transcends mere abstraction

XI. DIALECTICS
   1. Confrontation between legal rules and progressive views
   2. Contributions from AI and the need to recognize it in the legal sphere
   3. The end justifies the means (Machiavelli)

XII. THE TIME MACHINE (NEUTRINO ENERGY MACHINE)
   1. Preliminary prototype design
   2. Relationship to advances in artificial intelligence and quantum entanglement
   3. References to experiments and neutrino detection
   4. Potential applications (interstellar communication, time travel, etc.)

XIII. APPENDIX
   1. Limitations of the speed of light and quantum entanglement
   2. Neutrino–matter interaction and information channels
   3. Considerations for data transmission in the quantum channel

XIV. EQUATIONS AND MATHEMATICAL MODELS
   1. The equation א∞ = c^c: theoretical justification
   2. Integration with set theory (Cantor)
   3. Possible derivations and adjustments in string theory or QFT

XV. PRELIMINARY AND FINAL CONCLUSIONS
   1. Global summary of the research
   2. Legislative, scientific, and religious recommendations
   3. Feasibility and prospects for future protection of abstract formulas
   4. Quantum Tokenization and iOS: Optimization Models and Adaptive Fragment Selection

XVI. EPILOGUE
   Final remarks

XVII. EXECUTIVE SUMMARY
   Reflections on the main points; summary.

XVIII. BIBLIOGRAPHY
   1. Legal and intellectual property sources
   2. Scientific references (quantum physics, AI, neutrinos)
   3. Theological and literary works (Bible, Borges, Machiavelli, etc.)
   4. Additional documentation (links)

I. INTRODUCTION

From the beginning of time, the UNIVERSE has been governed by an infinite set of rules, forming a matrix of mathematical, physical, and even artistic and philosophical formulas aimed at fully understanding its operations. Humanity’s insatiable thirst for knowledge knows no limits, prompting people to use every possible tool to uncover the universe’s mysteries and determine the framework we should all follow.

This research attempts to follow the path of many brilliant minds in mathematics, physics, and the literary arts. Their common purpose was to unravel the uncertainties of infinity, clarify its true nature, and find answers distilled into a single equation—broad enough yet sufficiently simple to encapsulate the entirety of existence. These outstanding minds often reached the threshold of human incomprehension, enduring harsh destructive criticism and adversity in their respective eras.

Following the teaching of Niccolò di Bernardo dei Machiavelli (Niccolò Machiavelli) in his illustrious work The Prince, I have striven to carefully select the paths charted by notable thinkers such as Georg Ferdinand Ludwig Philipp Cantor, the pioneer of “mathematical theology”; Ludwig Eduard Boltzmann, with his theory of chaos and probabilities; Kurt Gödel, with his incompleteness theorem and intuitive perspective on mathematics; Alan Mathison Turing, who explored the practicality of Gödel’s theorem while relentlessly tackling new problems; and finally, Jorge Luis Borges, with his infinite Aleph.

My hope is that some facet of my efforts will at least resemble theirs.

This research is conducted from a theological perspective, with the support of linguistic experts in Ancient Hebrew and Aramaic concerning certain Biblical verses, to remain faithful to the original sense of these texts. In doing so, we honor the categorical mandate proposed by mathematician Georg Cantor, who argued that the solution to his absolute yet incomplete formula could not be found in mathematics alone, but in religion.

Similarly, to fulfill the requirements of the Intellectual Property (IP) course, key points on patenting formulas, quantum algorithms, and the utility impact of a simulated invention are discussed here. Matters of design and other administrative procedures are not addressed in the interest of brevity.

My aspiration is that, thanks to human progress in various scientific fields and its symbiotic relationship with Artificial Intelligence (AI), we will soon see the irreversible process of developing a communication mechanism that travels along the new routes of the cosmos.

This original publication provides mathematical, physical, artistic, literary, and—above all—religious concepts, intertwined inseparably to demonstrate the creation of the formula’s simplicity as derived from various Biblical verses. These verses act like a chain of blocks (BLOCKCHAIN). The corresponding footnotes are most illustrative and should be examined carefully.

In the short term, the main goal of this research is to secure the patent for the indivisible circuit or block of the invention stemming from the formula, leveraging generative AI driven by advanced machine-learning algorithms, and to build the toroidal-energy neutrino machine. It also aims to promote the use of AI to predict or “reconstruct” information before all the classical bits have been received, with neutrinos serving as the teleportation channel.

The present paper has been largely condensed to ensure rapid comprehension, omitting many remarks, reflections, designs, and expressions in the glossary and bibliographic references. It focuses on the core theme, preserving the details of industrial secrecy, and emphasizes that the majority of the illustrations have been created by Artificial Intelligence (AI).

A hypothetical practical utility is conferred upon the formula so that it might not be deemed purely abstract, thus fitting within the legal requirements for intellectual property protection. Consequently, the objective is to achieve legal protection for patents, and the position taken is to implement the “exception to the exception.”

I hope this essay pleases you and that together we embark on a journey through the quantum universe.


II. GLOSSARY

ALGORITHM:
A set of commands to be followed so that a computer can perform calculations or other problem-solving operations. Formally, an algorithm is a finite set of instructions executed in a specific order to perform a particular task.
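To make the definition concrete, here is a minimal, classic example (an editorial illustration, not part of the original text): Euclid's algorithm, a finite ordered set of instructions that always terminates with the greatest common divisor.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one instruction until b reaches 0."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(252, 105))  # → 21
```

Each pass strictly shrinks `b`, which is what guarantees the "finite set of instructions executed in a specific order" of the definition above.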

MODIFIED QUANTUM ALGORITHM:
A quantum algorithm consists of applying a series of quantum gates to entities (such as qubits or quregisters), followed by a measurement. It may be an adaptation of an existing quantum algorithm or a new design intended to exploit the unique properties of quantum computing to solve problems more efficiently or effectively than classical algorithms. The modification involving the formula א∞ = c^c contemplates infinite constructions impacting decision-making by analyzing multiple variables.
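The skeleton described here (state preparation, a gate sequence, then a measurement) can be sketched with plain linear algebra. This is a generic single-qubit illustration by the editor, not the modified algorithm the entry alludes to:

```python
from math import sqrt

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

H = [[1 / sqrt(2),  1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]   # Hadamard gate
Z = [[1, 0],
     [0, -1]]                        # phase-flip gate

state = [1.0, 0.0]                   # qubit prepared in |0>
for gate in (H, Z, H):               # apply the gate sequence H, Z, H
    state = apply(gate, state)
probs = [abs(a) ** 2 for a in state]  # Born rule: outcome probabilities

print(probs)  # ≈ [0, 1]: H·Z·H equals X, so the qubit is flipped to |1>
```

The point of the sketch is structural: everything before the final line is reversible gate algebra; probabilities appear only at the measurement step.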

ALICE, BOB, EVE …: In the cryptography community (and, by extension, in quantum physics and information theory), generic names are used to exemplify different participants in communication protocols. The most common are:

Alice and Bob:
They represent the legitimate parties in the communication—those who wish to securely exchange messages (or share quantum states).

Eve (from “eavesdropper” in English, meaning “spy”):
She is the malicious or intrusive figure who attempts to intercept or manipulate the communication.

Charlie, David, etc.:
These are other neutral agents or participants with specific functions, included when the protocol requires additional roles (for example, a mediator or a key distributor).

These names are a standard convention in scientific publications, textbooks, and examples of cryptographic/quantum protocols, used to clearly illustrate the role each “character” plays in the exchange of information.

BLOCKCHAIN:
Essentially, a distributed database of records, or a public ledger of all digital transactions or events that have taken place, which is definitive and beneficial to humankind. The data are chronologically consistent, since the chain cannot be deleted or modified without the consensus of the network, and the ledger is shared among the participating parties.
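The "chain of blocks" idea can be made concrete in a few lines. A minimal editorial sketch (standard-library hashing only, not any particular blockchain implementation): each block commits to the hash of its predecessor, so altering history breaks the links that follow.

```python
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """A block commits, via SHA-256, to its data and to the previous
    block's hash, so tampering invalidates every later link."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def is_valid(chain) -> bool:
    """The ledger is consistent only if every link matches."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol", chain[-1]["hash"]))
print(is_valid(chain))               # True

chain[1]["data"] = "Alice pays Eve"  # tamper with history...
chain[1]["hash"] = hashlib.sha256(json.dumps(
    {"data": chain[1]["data"], "prev_hash": chain[1]["prev_hash"]},
    sort_keys=True).encode()).hexdigest()
print(is_valid(chain))               # False: the next block's link breaks
```

Even after recomputing the tampered block's own hash, the following block still points at the old hash, which is exactly the "cannot be modified without consensus" property the definition invokes.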

TEMPORAL LOOP:
Temporal loops are mysterious anomalies where time either comes to a complete halt or drastically slows down. Eyewitnesses sometimes report repetitive activity within such an anomaly.

THE PRINCE:
Originally Il Principe in Italian, a 16th-century political treatise by the Italian diplomat and political theorist Niccolò Machiavelli.

WHITE DWARF:
A white dwarf is essentially a dead star that has exhausted all the nuclear fuel it can burn; it gradually cools and fades to ever lower temperatures. This is the final state of low-mass stars, and the Sun will end its life as a white dwarf.

TOROIDAL ENERGY:
The symbol of Osiris, also called the Flower of Life or “the equation of God,” is a figure representing a vector in equilibrium radiating twelve equal lines of energy. The primary model of this balanced energy current around this structure is known as a torus (toroid).

QUANTUM ENTANGLEMENT:
A peculiar property of quantum mechanics whereby particles become so interconnected that the state of one can instantly depend on the state of another, regardless of the distance between them.

NEUTRINO QUANTUM ENTANGLEMENT:
Theoretically, quantum entanglement of pairs of neutrinos could be used for long-distance cryptography distribution in a protocol similar to BB84. Measuring a property of a neutrino, such as spin, instantly determines the corresponding property of the other neutrino, even if separated by vast distances. This phenomenon challenges classical intuition about the nature of reality and has profound implications for quantum theory and quantum communication. Neutrino quantum entanglement could have applications in quantum communications, quantum cryptography, and possibly remote sensing and space navigation technologies.
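The BB84-style protocol this entry invokes can be sketched classically. The following toy simulation is the editor's illustration; it assumes an ideal, noiseless channel and an honest receiver, and it says nothing about the feasibility of neutrinos as carriers. It shows only the sifting step, in which a shared key survives wherever sender and receiver happened to choose the same measurement basis:

```python
import random

random.seed(2015)  # reproducible toy run
n = 64

# Alice encodes random bits in randomly chosen bases (0 or 1).
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures each carrier in his own random basis; when the bases
# match he recovers Alice's bit, otherwise his outcome is random.
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits  = [bit if ab == bb else random.randint(0, 1)
             for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases, keep only the matching positions.
alice_key = [bit for bit, ab, bb
             in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [bit for bit, ab, bb
             in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

assert alice_key == bob_key  # identical shared secret
print(len(alice_key))        # roughly n/2 positions survive
```

In the real protocol, a sample of the sifted key is then compared publicly: a high error rate reveals an eavesdropper (Eve), because measuring in the wrong basis disturbs the carriers.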

REVERSE ENGINEERING:
A process performed to extract information or a design from a product or object in order to determine its components, how they interact, and how it was manufactured.

ARTIFICIAL INTELLIGENCE (AI):
The capacity of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Automation: Creating and applying technologies to produce and deliver goods and services with minimal human intervention. Implementing automation technologies, techniques, and processes enhances the efficiency, reliability, and/or speed of many tasks previously performed by humans.

Systems with intelligent behavior analyze their environment and take actions with some degree of autonomy to achieve specific goals. They can be software-based or integrated into hardware devices.

FTL (FASTER THAN LIGHT):
Used to describe any communication, object, or phenomenon purported to travel or be transmitted faster than the speed of light, something relativity forbids in the real physical realm. In quantum physics (especially when discussing entanglement), the “appearance” of FTL communication often comes up; in practice, however, no useful information can currently be sent faster than light. Hence we refer to this as an illusion or “deception” that vanishes under closer scrutiny.

FRACTAL:
A fractal is a geometric object in which the same pattern repeats at different scales and orientations. The term fractal comes from the Latin fractus, meaning fractured, broken, or irregular. The concept is credited to the mathematician Benoit B. Mandelbrot.
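Self-similarity can be probed computationally. Here is a minimal editorial sketch of the membership test for the Mandelbrot set, one of the best-known fractals associated with Mandelbrot:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """c belongs to the Mandelbrot set if the orbit of z -> z*z + c,
    started at z = 0, never escapes the disk of radius 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # the orbit escaped: not in the set
    return True

print(in_mandelbrot(0))    # True: the orbit stays at 0
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
```

Zooming into the boundary of the set reproduces similar shapes at every scale, which is the repeating-pattern property the definition describes.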

DARK MATTER:
Dark matter is a type of matter that neither emits nor otherwise interacts with electromagnetic radiation. It does not emit, absorb, or reflect light, so it cannot be directly detected by optical instruments; its existence is instead inferred from its gravitational effects on visible matter such as stars and galaxies. Dark matter constitutes roughly 27% of the total matter–energy content of the universe, while ordinary matter (stars, planets, living organisms) makes up about 5%.

Its presence is essential in explaining several observed cosmic phenomena such as galaxy rotation speeds, the distribution of matter throughout the cosmos, and the formation of large-scale structures.

NEUTRINO:
Neutrinos are fundamental particles, devoid of electric charge and with very little mass, so they barely interact with normal matter. About 50 trillion neutrinos from the Sun pass through our bodies every second without affecting us. Sometimes called “ghost particles,” they traverse everything at nearly the speed of light.

PATENT:
According to Cornell Law School, “A patent grants the patent holder the exclusive right to exclude others from making, using, importing, and selling the patented innovation for a limited period.”

U.S. Patent Law was enacted by Congress under its constitutional authority to secure for a limited time the exclusive right of inventors to their discoveries.

SOLAR SUPERNOVA:
In about five billion years, the Sun will exhaust the hydrogen fuel in its core and begin fusing helium instead of hydrogen. Strictly speaking, a star of the Sun’s mass is not expected to explode as a supernova; after its red-giant phase it will end as a white dwarf.

1895 THEORY OF INFINITE SETS:
Cantor’s final work, where he began equating the concept of “Absolute Infinity” (not conceivable by the human mind) with God.

TOROID:
“A Torus is like the Universe’s breathing; it is the shape taken by the energy current at any level of existence” (Nassim Haramein). Toroidal Energy is based on a doughnut-shaped energy vortex with continuous inward and outward motion in an endless cycle. There is a large body of scientific and metaphysical information suggesting that Toroidal Energy is the best model we have for understanding the universe’s primary structure. We observe a spherical energy vortex—self-organizing and self-sustaining—whose center is its energy Source. Every subatomic particle, every human body, planet, solar system, galaxy… is sustained by a Toroidal Energy field generating a magnetic field. The torus is nature’s design: balanced and inherently complete, seen in Earth’s magnetic field, as well as in individuals and atoms.


III. ORIGINAL PUBLICATION DATED JUNE 26, 2015

https://perezcalzadilla.com/el-aleph-la-fisica-cuantica-y-multi-universos-2/ (English version).

THE MESSAGE OF THE ALEPH, QUANTUM PHYSICS, AND MULTI-UNIVERSES.

The Aleph, א, is the first consonant of the Hebrew alphabet [1]. It has various meanings and symbolizes transformative power, cultural power, creative or universal energy, the power of life, and the channel of creation, all while representing the beginning and the end due to its atemporal nature.

Aleph is also associated with the Codex Sinaiticus, a 4th-century manuscript of the Bible.

The letter “א” is traced back to the Bronze Age, about a thousand years before Christ, in the Proto-Canaanite alphabet. It was initially a hieroglyph representing an ox, which passed into the Phoenician alphabet (Alp), then into Greek (A), Cyrillic (A), and Latin (A).

Astrologically, it correlates with the Taurus sign (the Bull), in white and yellow, linked to sulfur. Among Kabbalists, “ALEPH” is especially sacred because it denotes the Trinity in Unity, composed of two “YODs” (one upright and another inverted) joined by an oblique stroke, thus forming “א”.

This structure signifies taking something as it was created by nature and transforming it into a better instrument, perfected in the service of higher reality. It is a sort of fiction extended over time—the first letter of the Hebrew alphabet, endowed with great mystical power and magical virtue by those who adopted it. Its numeric value is “1” to some, though others see its real value as “0.” [2]

Interestingly, while Aleph is the first letter of the Hebrew alphabet, it is a consonant, since the Hebrew script has no vowel letters. In the purest, most ancient form of Hebrew, the absence of vowels allows multiple meanings for each word, leaving the reader in some suspense. This lack of vowels is an artifact of the script’s own primitiveness, preserving a deferred meaning. Aleph in its archaic sense differs from the Latin “A,” the English “A,” or the Greek “Alpha.” Today it is seen as a sort of missing vowel, a relic, a trace or symbol of the pictographic script it supplanted. So, in effect, it is “nothing” [3].

Aleph also connects us to nothingness, emptiness—where there is no thing, no phenomenon. This semiotic phenomenon underscores that beyond any system lies the possibility of signifying something else. Inspired by this, Georg Cantor [5] (1845–1918) used it to measure infinite sets, positing different sizes or orders of infinity. [6] In his set theory, Aleph represents the cardinality of infinite sets. For example, aleph sub-zero, ℵ₀, is the cardinality of the integers and the smallest of the transfinite cardinals.

Jorge Luis Borges also followed Cantor in his quest for “absolute infinity.” [7] Borges conceived the Aleph as an artifact reflecting all things in the world, concluding that if space is infinite, we are at any point of space, and if time is infinite, we are at any point of time (The Book of Sand, 1975). Notably, Borges cautioned readers about the dangers of chasing infinity [9]. Cantor’s contemporary, the mathematician Ludwig Eduard Boltzmann (1844–1906) [10], likewise grappled with the concept of infinity, situating it in a timeless framework.

In contemporary times, “Aleph” inspired Paulo Coelho in his novel Aleph, describing it as the point concentrating all of the universe’s energy—past, present, and future.

Maybe this notion of “nothingness” is why the Book of Genesis begins with “Beth” (Bereshit) instead of “Aleph,” the first Hebrew letter. As the first letter of the Hebrew Bible, Beth might signify something more “feminine.”

Moreover, pronouncing the Hebrew letter Aleph has a long “A” sound in Greek, akin to “HENO sano,” the letter Eta with a rough-breathing mark “H.” The Hebrew consonant with an elongated E can match precisely with AI in Greek (Lambda–Iota), while the Hebrew letter Yod can produce an “AH” sound similar to the Greek alpha. Hebrew uses letters—like Vav, pronounced “HYOU”—but Greek often ends masculine names with “S,” “N,” or “R,” so these linguistic–phonetic steps produce the phoneme “ELIAS” [11] in Greek (HLIAS). Aleph is thus intimately linked to the Hebrew prophet Elijah, who—like Enoch (Genesis 5:18–24, Hebrews 11:5)—did not die [12] but was carried off to heaven.

The Bible says Elijah was swept away by a chariot of fire with horses of fire (2 Kings 2:11). Elijah of Tishbe is one of the Old Testament’s most fascinating figures—his parents are never mentioned, and there is apparently no record of his childhood. Yet his role is fundamental: he is named in the Book of Malachi as the prophet who would precede the Messiah, in both His first coming and His final advent. In Matthew 11:14, Jesus reveals to His disciples that Elijah—foretold in Malachi—had already come; the disciples understood that He was referring to John the Baptist. Some theologians use these passages to argue for reincarnation, though scientifically it might be more akin to “quantum teleportation”: scientists have teleported the quantum state of light across a distance of ~143 km. In such experiments, the properties of the original photons are measured (a procedure that destroys them) and reconstructed elsewhere using quantum principles. Moreover, Israeli physicists have managed to entangle two photons that never coexisted in time. They first generated photon #1 and measured its polarization (a procedure that destroys the measured photon). Later they generated photon #2 and found that it had exactly the opposite polarization, demonstrating entanglement even though the two photons never existed simultaneously.

Quantum entanglement cannot be fully explained by classical physical laws. It’s a quantum-mechanical state in which two (or more) particles—e.g., photons—become so entangled that any change in one is instantly “felt” by the other, regardless of distance or time [13]. Using hydrogen’s photons or neutrinos traveling close to the speed of light might yield new science [14], as it’s the first creation element and the most abundant in the universe.

Does this phenomenon parallel how Prophet Elijah was transported by four horses of fire, symbolizing high-energy beams or lasers crossed/aligned for quantum teleportation? This phenomenon may form the basis for future quantum computers and quantum cryptography, enabling instantaneous data transmission in zero time.

When analyzing Genesis 1:3 (“Let there be light”), the literal Hebrew text וַיֹּאמֶר אֱלֹהִים יְהִי אוֹר וַיְהִי־אוֹר (VAYOMER ELOHIM YEHÍ OR VAYEHÍ-OR) reveals three temporal moments:

  1. Yehí (future tense),
  2. Vayehí (past tense), and
  3. The present tense is implied because the verb “to be” in Hebrew grammar is tacit in the present-tense conjugation. Thus we have future, present, and past.

The Hebrew word for “light,” “OR,” has a numerical value of 207, a multiple of three. Adding a Yod between the second and third letters forms the word AVIR, meaning “ether,” the space that supports all Creation.
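The letter arithmetic in this paragraph can be checked mechanically. A short editorial sketch using the conventional gematria values of the letters involved (the value table is standard; the helper names are the editor's):

```python
# Conventional gematria values for the letters used in this passage.
GEMATRIA = {"aleph": 1, "vav": 6, "yod": 10, "resh": 200}

def word_value(letters):
    """Sum the numerical values of a word's letters."""
    return sum(GEMATRIA[letter] for letter in letters)

or_value = word_value(["aleph", "vav", "resh"])           # OR, "light"
avir_value = word_value(["aleph", "vav", "yod", "resh"])  # AVIR, "ether"

print(or_value, or_value % 3 == 0)  # 207 True: a multiple of three
print(avir_value)                   # 217: OR plus the inserted Yod (10)
```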

Georg Cantor, in his continuous search for an equation that might encapsulate the infinite, stressed that the solution lay not in mathematics but in Scripture. He contended that infinity is not only a mathematical mystery but also a religious one [16]. Psalm 139:16—“Your eyes saw my unformed body; all the days ordained for me were written in your book before one of them came to be”—hints that the quest for an absolute totality is found in Genesis. If Aleph encapsulates the universe [17] and is bound to the Creator, we might say א = c + c + c + … [18].

Unquestionably, the Aleph may yield the keys to expanding the boundaries of reality and possibility beyond all scientific inquiry and human understanding, toward what some call En-Sof [19] or the Multiverse [20][21][22].
By PEDRO LUIS PEREZ BURELLI. / perezburelli@perezcalzadilla.com.


[1] In the era of the Temple under Roman rule, people communicated colloquially for their daily tasks and work in Aramaic. However, in the Temple, they spoke exclusively in Hebrew, which is why it is called “Lashon Hakodesh,” the sacred language.

[2] From a scriptural exegesis viewpoint, the mathematical value of Aleph is dual—binary [0, 1].

[3] Aleph, as a Hebrew letter (though it cannot be articulated itself), makes it possible to articulate all the other letters, and by linguistic-literary extrapolation, it encompasses the entire Universe within it.

[4] The mathematician Kurt Gödel (1906–1978) posits that regardless of the system in question, the mind in some sense stands outside it because one uses the mind to establish the system. Once the system is in place, the mind has a way of reaching truth beyond logic—independently of any empirical observation—through mathematical intuition. This suggests that in every system—which is by definition finite—the mind surpasses it and orients itself toward another system, which in turn depends on yet another, and so forth ad infinitum.

[5] Georg Cantor was a pure mathematician who created a transfinite epistemic system and worked on the abstract concepts of set theory and cardinality. From that work, it was discovered that the infinite itself contains multiple infinities. The first of the “infinitude of infinities” discovered by Georg Cantor is “Aleph,” which also gives its name to a short story by Jorge Luis Borges. From there, the concept known as the “Continuum” was introduced.

[6] Within his religious framework, Georg Cantor used the Hebrew alphabet letter “Aleph,” followed by a subscript zero, ℵ₀, to denote the cardinal number of the set of natural numbers.
This number exhibits properties which, from conventional Aristotelian logic, appear paradoxical:
ℵ₀ + 1 = ℵ₀; ℵ₀ + ℵ₀ = ℵ₀; ℵ₀² = ℵ₀.
It is similar to the velocity addition law in Special Relativity, where c + c = c (c is the speed of light). In set theory, infinity is related to cardinalities and set sizes; in relativity, infinity appears in the context of space, time, and the universe’s energy. Here, we attempt to unify both formulas, though it is more of a conceptual representation than a rigorous mathematical equation, since we are combining notions from different theories.

The desire to unify everything in an absolute sense is not exclusive to mathematics; it extends to physics as well, in the conception of a unifying theory of the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces.

[7] A Cantorian infinite set is defined thus: “An infinite set is one that can be put into a reciprocal one-to-one correspondence with a proper subset of itself.” In other words, each element of the subset can be directly paired, element by element, with elements of the set to which it belongs. Consequently, the entirety of the cosmos should fulfill the axiom that postulates the equivalence of the whole with its part.
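Cantor's criterion can be demonstrated directly with the pairing n ↔ 2n, which matches the natural numbers one to one with their proper subset, the even numbers. The following editorial sketch checks the pairing on a finite window (the infinite claim itself is, of course, only suggested by the finite sample):

```python
# Pair each natural number n with the even number 2n.  The evens are a
# proper subset of the naturals, yet no natural is left unpaired:
# Cantor's hallmark of an infinite set, shown here on a finite window.
window = range(20)
pairing = {n: 2 * n for n in window}

# One-to-one: distinct naturals receive distinct partners.
assert len(set(pairing.values())) == len(pairing)
# Onto the subset: every even number below 40 is hit exactly once.
assert sorted(pairing.values()) == list(range(0, 40, 2))
print(pairing[7])  # → 14
```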

[8] Jorge Luis Borges sought an object that could contain all cosmic space within it, much as in eternity all time—past, present, and future—coexists. He describes this in his extraordinary short story “The Aleph,” published in the magazine Sur in 1945 in Buenos Aires, Argentina. He reminds us that the Aleph is a tiny iridescent sphere barely two or three centimeters in diameter, yet it contains the entire universe—an irrefutable testament to infinity: although limited by its diameter, the sphere contains as many points as infinite space itself, which in turn contains the sphere. Later, it takes the shape of a hexagon in another work by the same author, The Library of Babel.

[9] “We dream of infinity, but if we were to achieve it—in space and time, in memory and consciousness—it would destroy us.” Borges suggests that infinity is a constant chaos and that attaining it would annihilate us, for humankind is circumscribed by space, time, and death for a reason: otherwise, we would not attach the same significance to our actions if we did not consider that each action might be our last. For this author, infinity is not only unattainable but also any part of it is inconceivable. This view aligns with mathematician Kurt Gödel’s (1906–1978) statement that in every logical system there will be unsolvable problems (Incompleteness Theorem).

In the majority of themes found in the works of Borges and Federico Andahazi (El secreto de los flamencos, Buenos Aires: Planeta, 2002), one important observation arises: if it were possible to reach an Aleph—beyond whether it could be explained or represented—human life would lose its meaning. This is because, to a large extent, the value of human existence depends on the individual’s capacity for wonder at every idea that might resolve certain uncertainties and simultaneously create new ones. After all, discovering an absolute is equivalent to glimpsing the moment when something attains maximum depth, a maximum sense, and ceases to be interesting altogether. This warning is referenced in Acts 1:7, which states, “He replied, ‘It is not for you to know the times or dates the Father has set by his own authority.’” Humankind may investigate up to that point, but no further; see Deuteronomy 4:32: “For ask now of the days that are past, which were before thee, since the day that God created man upon the earth, and from the one side of heaven unto the other, whether there hath been any such thing as this great thing is, or hath been heard like it?” In the same perspective, see Matthew 24:36: “But about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father.” Additionally, note the statement by Rabbi Dr. Philip S. Berg in The Power of the Alef-Bet, where he explains: “If we lived on a planet with little change, boredom would quickly set in. There would be no motivation for humanity to improve. On the other hand, if our universe were unpredictable, where things changed at random, there would be no way of knowing which step to take.” In this sense, see also Ecclesiastes 7:14: “In the day of prosperity be joyful, but in the day of adversity consider: God has made the one as well as the other, so that man may not find out anything that will be after him.”

[10] Ludwig Boltzmann, who in 1877 found a way to mathematically express entropy from a probabilistic standpoint, and Alan Mathison Turing, who persevered in the infinite quest for mathematical answers, both illustrate humanity’s tendency to encapsulate infinity in a timeless way—a pursuit not limited to scientific inquiry but shared by other fields of human knowledge, including the arts. In that vein, the poet William Blake (1757–1827), in The Marriage of Heaven and Hell (1790–1793), touches on infinity with an axiomatic sort of prose when he says: “To see a world in a grain of sand, and a Heaven in a wild flower, hold Infinity in the palm of your hand and Eternity in an hour.”

[11] Elijah (El–yahu) is formed from two (2) Sacred Names—“EL,” corresponding to Chesed (the attribute of giving and offering in the Tree of Life), and “YAHU,” corresponding to Tiferet (compassion). The prophet Elijah is also intimately connected with the interaction of light, which orders the chaos on the first day of creation, and with the application of a three-time formula that generates it, as will be explained in this essay in relation to space-time.

His name is spelled Aleph (e), Lamed (li), Yod (ya), He (h), Vav (u) and includes the Tetragrammaton. Notably, Aleph is a silent or soundless letter.

Psalm 118:27 says: “El Adonay vayaer lanu,” which translates to “God who enlightens us.” When spelled out, the verse contains Aleph, Lamed, Yod, He, Vav (the same consonants used in Elijah’s name), clearly showing the prophet’s close link to light. Light is also decisive as a presage of the coming of the Creator: Luke 17:24 states, “For as the lightning flashes and lights up the sky from one side to the other, so will the Son of Man be in his day.”

Additionally, Hebrew is a dual language consisting of words and numbers, based on sacred arithmetic, attributing a numerical value to each letter; the equivalences or multiples among them reflect analogous spiritual consciousness.

The arithmetic value of the word “Light” (Aleph, Vav, Resh) is 207 + 1 (the integrity of the word), which adds up to 208.

Elijah, spelled Aleph, Lamed, Yod, He, Vav, equals 52, and multiplying this by 4 yields the same value as that of Light (208). Thus, we find a mathematical identity.

How do we explain that multiplication by 4?

In the Pentateuch—the five (5) books of the Bible, traditionally ascribed to the Hebrew Patriarch Moses and known in the Hebrew tradition as the Torah, the core of Judaism—near the end of the Israelites’ desert journey that began with the Exodus, an epidemic befalls the people (the episode is recounted in Numbers 25), claiming 24,000 lives until it is stopped by the righteous action of one Pinchas, son of Eleazar and grandson of Aaron the High Priest. He is rewarded with an “Eternal Covenant.” Exegesis says that he obtained eternal life, both physical and spiritual, and according to the biblical texts, Pinchas later becomes Elijah. This is confirmed by the fact that Pinchas’ name (Pe, Yod, Nun, Chet, Samekh) sums to 208, matching the equation Light = Elijah × 4.

Kabbalah (the mystical interpretation of the Pentateuch) explains that at that historic, momentous deed, Pinchas received the two (2) souls of Aaron’s sons, Nadab and Abihu, who died after offering unauthorized fire. When Elijah was to hand over his wisdom and prophecy to his favored disciple Elisha, the sacred text says Elisha requested a double portion of his spirit, adding in Hebrew the seemingly superfluous word “Na” (“please”), whose letters Nun and Aleph are the initials of Nadab and Abihu. Here we find the reason for multiplying the prophet Elijah’s name (52) by 4 (two souls multiplied by twofold) to get 208, which is also the numeric value of Light, thus demonstrating the mathematical identity.
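The letter arithmetic above can be checked mechanically. The sketch below uses the standard Hebrew gematria values for the letters the text names, together with the extra +1 that the author applies to “Light” as “the integrity of the word”:

```python
# Standard Hebrew gematria values for the letters used in the text.
VALUES = {"Aleph": 1, "Lamed": 30, "Yod": 10, "He": 5, "Vav": 6,
          "Resh": 200, "Pe": 80, "Nun": 50, "Chet": 8, "Samekh": 60}

def gematria(letters):
    """Sum the numerical values of a sequence of Hebrew letter names."""
    return sum(VALUES[letter] for letter in letters)

light = gematria(["Aleph", "Vav", "Resh"]) + 1             # 207 + 1 = 208
elijah = gematria(["Aleph", "Lamed", "Yod", "He", "Vav"])  # 52
pinchas = gematria(["Pe", "Yod", "Nun", "Chet", "Samekh"]) # 208

# The identities claimed in the text: Light = Elijah x 4 = Pinchas
print(light, elijah * 4, pinchas)  # 208 208 208
```

Running the script prints the three values, confirming the identity 208 = 52 × 4 = 208 stated in the text.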

[12] The prophet Elijah escapes the law of entropy. (Entropy is a key concept in the Second Law of Thermodynamics, which states that “the amount of entropy in the universe tends to increase over time.” In other words, given enough time, systems naturally tend toward disorder.)

[13] The technique used by Israeli physicists to entangle two photons that never existed at the same time is quite complex. The experiment began by producing two photons (called “1” and “2”) and entangling them. Photon 1 was immediately measured, thus destroyed, but not before determining the state of photon 2. Then the physicists generated another pair of entangled photons (3 and 4) and entangled photon 3 with the “survivor” of the first pair, photon 2. By association, this also entangled photon 1 (which no longer existed) with photon 4. Although photons 1 and 4 had never existed simultaneously, photon 4’s state was exactly the opposite of photon 1’s; that is, they were entangled.
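The protocol described in this footnote is known as entanglement swapping. As a conceptual sketch (not the actual photonic experiment, which involves polarization states and probabilistic Bell measurements), the state-vector bookkeeping can be reproduced classically:

```python
from math import sqrt

# Four qubits (1, 2, 3, 4); amplitudes indexed by the bit tuple (b1, b2, b3, b4).
# Start with two Bell pairs: qubits (1, 2) entangled and qubits (3, 4) entangled.
state = {}
for b1 in (0, 1):
    for b3 in (0, 1):
        state[(b1, b1, b3, b3)] = 0.5  # (|00> + |11>) / sqrt(2) on each pair

# Bell measurement on qubits 2 and 3: project onto (|00> + |11>) / sqrt(2).
post = {}
for b1 in (0, 1):
    for b4 in (0, 1):
        amp = (state.get((b1, 0, 0, b4), 0) + state.get((b1, 1, 1, b4), 0)) / sqrt(2)
        if amp:
            post[(b1, b4)] = amp

# Normalize: qubits 1 and 4 now form a Bell pair, despite never having interacted.
norm = sqrt(sum(a * a for a in post.values()))
post = {k: a / norm for k, a in post.items()}
print(post)  # {(0, 0): 0.707..., (1, 1): 0.707...}
```

The surviving amplitudes sit only on (0, 0) and (1, 1): the two outer qubits are maximally correlated even though they never met, which mirrors how photon 4 ends up entangled with the already-destroyed photon 1.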

Entanglement works instantaneously regardless of the distance or time between the two particles—whether they are centimeters apart or on opposite sides of the Universe. Now, this experiment has demonstrated that entanglement not only exists in space but also in time, or more precisely in space-time, implying the creation of a wormhole—a sort of tunnel connecting both particles in another dimension.

It is too early to say what the practical applications of this discovery might be, though the potential is significant for computing and telecommunications. For instance, rather than waiting for one of two entangled particles to arrive at its destination via an optical fiber, this “double pair” technique would let the emitter manipulate its photons—and thus its communication—instantaneously.

[14] Bohr’s model for hydrogen, involving an electronic transition between quantized energy levels with distinct quantum numbers (n), produces an emission photon with quantized energy.
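The quantized emission energies can be illustrated numerically. A minimal sketch using the hydrogen Rydberg energy (about 13.6 eV); the function name is my illustrative choice, not standard notation:

```python
# Bohr model: energy of the photon emitted when the electron in hydrogen
# drops from level n_upper to level n_lower.
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy in eV

def photon_energy_ev(n_upper, n_lower):
    return RYDBERG_EV * (1 / n_lower ** 2 - 1 / n_upper ** 2)

# Balmer-alpha line (3 -> 2): about 1.89 eV, the red line of hydrogen.
# Lyman-alpha line (2 -> 1): about 10.20 eV, in the ultraviolet.
print(round(photon_energy_ev(3, 2), 2), round(photon_energy_ev(2, 1), 2))
```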

[15] Which light is the sacred text referring to—the visible light our eyes perceive daily through our senses, or some singular energy? The answer is that the light established in Genesis 1:3 is part of the Creator Himself, thus a very special kind of light. Meanwhile, the light our senses allow us to perceive is that referred to on the fourth day of creation (Genesis 1:14), when God says, “Let there be lights,” referring to the Sun, Moon, and stars. On this same note about the uniqueness of light, see 1 Timothy 6:16, which tells us: “…who alone has immortality and dwells in unapproachable light, whom no one has ever seen or can see….”

[16] A paradox is the combination of two ideas that initially seem impossible to reconcile.

[17] In this article, the term “universe” is used in the Borgesian sense, consistent with the dictionary definition “the set of all created things,” which also coincides with the definition of the word “cosmos.”

[18] “א” represents infinity, and “c” is the speed of light in its three times—future, present, and past. Perhaps one day, the addition of speeds of light will be feasible via particle accelerators, aided by artificial intelligence (AI).

[19] Absolutely infinite God: a term used in Kabbalistic doctrine.

[20] Stephen Hawking argues in his book The Grand Design that other realities exist besides our own. The concept of Multiple Universes, or a set of Parallel Universes, is a possible scenario. Even if the Universe has a finite duration, it remains one universe among many, possibly with physical laws different from those governing our known Universe.

[21] Concerning the management of time, Ephesians 5:15–16 invites the following reflection: there is a limited number of days on this earth, so each of us only has a certain amount of time. It states: “15 Be very careful, then, how you live—not as unwise but as wise, 16 making the most of every opportunity, because the days are evil.” Consequently, humanity must use its scientific prowess to discover ways of modifying time.

[22] The coexistence of multiple universes and their interaction is a hypothesis within quantum physics. It proposes that all dimensions may add up to form an infinite set, with each dimensional set corresponding to a unique oscillation frequency distinct from that of other universes. Initially, these specific vibration frequencies keep each of the Multi-Universes isolated within the global structure. However, in theory, if all space-time points belong to the same substructure called “Universe”—framed by fractal geometry—then it is possible for them to interact or connect, even communicate, due to space-time anomalies. These anomalies constitute the principle of “Dimensional Simultaneity” and apply to Particle Physics. They have been demonstrated in the following cases:

A) Subatomic particles, such as electrons, can occupy different locations at the same time within the same orbital.
B) Elementary particles, such as neutrinos, can travel along trajectories that last longer than their mean lifetime.
C) Fundamental particles, such as quarks and leptons, can occupy the same place at the same time, with no difference between the effects of material particles and energy particles.

NEUTRINO SWARM.

CONSEQUENTLY, TO ACHIEVE THIS CORRESPONDENCE RELATION AMONG DIMENSIONAL SETS, UNIFYING IN AN INFINITESIMAL INSTANT THE SIMULTANEITY OF EACH UNIVERSE’S INDIVIDUAL FREQUENCIES THAT BELONG TO EACH CANTORIAN SET OR SUBSET—SO THAT THE AXIOMATIC EQUIVALENCE OF THE WHOLE AND THE PART IS REALIZED—THE ADDED SPEEDS OF LIGHT MUST BE OF SUCH MAGNITUDE AS TO GENERATE THE CORRESPONDING SPACE-TIME ANOMALY WITHIN THE UNIVERSE, THUS CONFIGURING AN INTERFACE THAT GIVES RISE TO INTER-UNIVERSE INTERACTION. WE COULD REPRESENT THIS CONCLUSION USING THE FOLLOWING EQUATION OR FORMULA:

Where “א∞” is the interaction of two or more multiverses belonging to an infinite set or subset, and “c” is the speed of light raised to its own power, that is, to itself. (א∞ = c^c).

In brief, the aforementioned formula posits a relationship between multiverse interactions and the speed of light, suggesting that their correlation produces a spacetime distortion proportional to the “amplification” of the speed of light. If all multiverse interactions converge within a single unity, it implies that all dimensions coalesce into an absolute whole, indicative of an omnipresent power.

This equation attempts to unify Cantor’s set theory with both relativistic and quantum physics, proposing that by incorporating the speed of light across various dimensions and time frames, one can achieve an equivalence allowing inter-universe connections within a shared spacetime framework.

The approach presented in this equation aims to capture the essence of infinity and divine omnipresence, integrating physical and theological concepts into a formula symbolizing the unity of all dimensions and the interaction of multiverses under an infinite and absolute framework. See the following citations:

REF. 1 KINGS 8:27: “But will God indeed dwell on the earth? Behold, heaven and the heaven of heavens cannot contain thee; how much less this house that I have built?”
EXODUS 33:14: “And he said, My presence shall go with thee, and I will give thee rest.”
JEREMIAH 23:24: “‘Can any hide himself in secret places that I shall not see him?’ saith the LORD. ‘Do not I fill heaven and earth?’ saith the LORD.”
PROVERBS 15:3: “The eyes of the LORD are in every place, beholding the evil and the good.”
JEREMIAH 31:34: “…for they shall all know me, from the least of them unto the greatest of them, saith the LORD: for I will forgive their iniquity, and I will remember their sin no more.”
ACTS 17:24: “God that made the world and all things therein, seeing that he is Lord of heaven and earth, dwelleth not in temples made with hands…”
COLOSSIANS 1:17: “And he is before all things, and by him all things consist.”
MATTHEW 18:20: “For where two or three are gathered together in my name, there am I in the midst of them.”
ISAIAH 57:15: “For thus saith the high and lofty One that inhabiteth eternity, whose name is Holy: ‘I dwell in the high and holy place, with him also that is of a contrite and humble spirit…’”
HEBREWS 4:12: “For the word of God is quick, and powerful, and sharper than any twoedged sword…”
NUMBERS 12:6–8: “‘Hear now my words: If there be a prophet among you, I the LORD will make myself known unto him in a vision, and will speak unto him in a dream…’”
PSALM 103:19: “The LORD hath prepared his throne in the heavens; and his kingdom ruleth over all.”
PSALM 33:6: “By the word of the LORD were the heavens made; and all the host of them by the breath of his mouth.”
Casiodoro de Reina Bible (1569).
REVELATION 1:8: “I am Alpha and Omega, the beginning and the ending, saith the Lord, which is, and which was, and which is to come, the Almighty.”
*(IN MY OPINION, THE FIRST PART OF THIS VERSE REFERS TO THE PHYSICAL WORLD, THE BIRTH OF THE UNIVERSE ACCORDING TO THE BIG BANG THESIS—AN EXPANSION PROCESS—AND THE END IS THE “BIG CRUNCH,” THE NAME GIVEN TO THE THEORY THAT THE UNIVERSE WILL END THROUGH A CONTRACTION OPPOSITE TO THE BIG BANG. REGARDING THE SECOND PART OF THE VERSE, IT ALLUDES TO THE SPIRITUAL REALM WHERE THERE IS AN ABSOLUTE CONTINUUM, INDICATIVE OF THE MULTIVERSE OR THE EN-SOF.) Finally, I declare: The Creator came into being before time, has neither beginning nor end, and His greatest work is to offer boundless happiness to humanity.

Prepared by: PEDRO LUIS PÉREZ BURELLI

2024 Update Note

1. REPRESENTATION IN PROGRAMMING LANGUAGE FOR QUANTUM COMPUTERS

Although the formula א∞=c^c is conceptual and not derived from established physical laws, we can still imagine how quantum computers might simulate or represent complex systems related to these concepts.

a) Limitations and Considerations

  • Representing Infinity: Quantum computers use a finite number of qubits, so infinite cardinalities cannot be represented directly.
  • Exponentiating Physical Constants: Raising the speed of light to itself yields an extremely large number lacking direct physical confirmation.
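The second limitation can be made concrete with a back-of-the-envelope computation (treating c as the bare number 299,792,458, since exponentiating a dimensional constant by itself has no standard physical meaning):

```python
import math

c = 299_792_458  # numerical value of the speed of light in m/s
# The number of decimal digits of c**c is c * log10(c): roughly 2.5 billion
# digits, far beyond anything a register of qubits could hold directly.
digits = c * math.log10(c)
print(f"{digits:.3e}")
```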

b) Quantum Simulation of Complex Systems
Quantum computers can simulate highly complex quantum systems. We can use simulation algorithms to model interactions in complex systems.

Qiskit Example: (illustrative code)

# This code uses the Quantum Fourier Transform (QFT) to process quantum states in superposition
# (conceptual demonstration)
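Since the original publication leaves the Qiskit listing as a placeholder, here is a self-contained classical simulation of the same idea: the n-qubit QFT as an explicit unitary matrix, applied to a uniform superposition (a conceptual sketch using only the standard library, not quantum-hardware code):

```python
import cmath

def qft_matrix(n):
    """The n-qubit Quantum Fourier Transform as a 2**n x 2**n unitary matrix."""
    N = 2 ** n
    w = cmath.exp(2j * cmath.pi / N)  # primitive N-th root of unity
    return [[w ** (j * k) / N ** 0.5 for k in range(N)] for j in range(N)]

def apply(matrix, state):
    return [sum(row[k] * state[k] for k in range(len(state))) for row in matrix]

# The QFT maps the uniform superposition of 3 qubits to the basis state |0>.
n = 3
N = 2 ** n
uniform = [1 / N ** 0.5] * N
out = apply(qft_matrix(n), uniform)
print(round(abs(out[0]), 6))  # 1.0
```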

c) Quantum Exponentiation
We cannot directly calculate c^c, but we can explore exponentiation in quantum systems.

Illustration of quantum exponentiation: (simplified snippet)

# Example code to mimic exponentiation using quantum rotations
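As a runnable stand-in for the placeholder above: repeatedly applying a phase rotation composes the angles additively, which is the mechanism quantum algorithms use to realize powers U^(2^k) of a unitary (a minimal classical sketch, not hardware code):

```python
import cmath

theta = 0.3  # an arbitrary rotation angle for the demonstration
rotation = cmath.exp(1j * theta)  # phase acquired by |1> under Rz(theta)

# Applying the rotation 2**3 = 8 times yields the single rotation Rz(8 * theta):
amp = 1 + 0j
for _ in range(2 ** 3):
    amp *= rotation
print(abs(amp - cmath.exp(1j * 8 * theta)) < 1e-12)  # True
```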
2. APPLICATIONS IN QUANTUM AI

Incorporating quantum algorithms can drive AI’s evolution, enabling large-scale data analysis more efficiently.

a) Quantum Machine Learning
Example: A Variational Quantum Classifier:

# Code for a quantum classifier solving the XOR problem
# Demonstrating how quantum algorithms can tackle complexities that challenge classical methods
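In place of the omitted listing, here is a self-contained classical simulation of a tiny parameterized quantum circuit that solves XOR: basis-encode the two input bits, rotate qubit 1 by a trainable angle, entangle with a CNOT, and read out the expectation of Z on qubit 2. The circuit layout and the grid-search “training” are my illustrative choices, not the article’s original code:

```python
import math

def simulate(x1, x2, theta):
    """Encode |x1 x2>, apply Ry(theta) to qubit 1, then CNOT(1 -> 2);
    return the expectation value of Z on qubit 2."""
    state = [0.0] * 4                 # amplitudes indexed by 2*q1 + q2
    state[2 * x1 + x2] = 1.0
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    ry = [[c, -s], [s, c]]            # Ry(theta) acting on qubit 1
    new = [0.0] * 4
    for q1 in (0, 1):
        for q2 in (0, 1):
            for q1p in (0, 1):
                new[2 * q1p + q2] += ry[q1p][q1] * state[2 * q1 + q2]
    new[2], new[3] = new[3], new[2]   # CNOT: flip qubit 2 when qubit 1 is |1>
    return sum((-1) ** q2 * new[2 * q1 + q2] ** 2 for q1 in (0, 1) for q2 in (0, 1))

def accuracy(theta):
    """Predict XOR = 1 whenever <Z2> < 0; count correct inputs out of 4."""
    return sum((simulate(x1, x2, theta) < 0) == bool(x1 ^ x2)
               for x1 in (0, 1) for x2 in (0, 1))

# "Train" by grid search over the single variational parameter.
best_theta = max((t * 0.2 for t in range(16)), key=accuracy)
print(accuracy(best_theta))  # 4 -- all four XOR inputs classified correctly
```

The entangling CNOT is what makes this work: a single-qubit (non-entangling) classifier cannot separate XOR, which is the standard motivation for this example.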

b) Quantum Optimization
Algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) can solve complex problems more efficiently.

# Example code employing QAOA for combinatorial optimization
# Relevant in areas like scheduling, AI-based decision making
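Running QAOA itself requires a quantum SDK, so as a self-contained reference point, here is the combinatorial problem such a circuit would approximate: MaxCut on a small hypothetical graph, solved exactly by brute force. QAOA prepares a parameterized superposition over these same 0/1 assignments and optimizes its angles toward high-value cuts:

```python
from itertools import product

# Hypothetical example graph: a triangle (0, 1, 2) plus a pendant edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]

def cut_value(assignment):
    """Number of edges crossing the partition encoded by the 0/1 assignment."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

best = max(product((0, 1), repeat=4), key=cut_value)
print(best, cut_value(best))  # the maximum cut has value 3
```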
3. CONCEPTUAL INTEGRATION AND EVOLUTION OF AI

The א∞ = c^c equation, more a conceptual than purely mathematical statement, inspires the synergy between AI and quantum computing:
  • Complex information processing
  • Quantum deep learning
  • Simulation of natural quantum phenomena

Thus, exploring ideas like “infinite cardinalities” and unifying physical–mathematical theories galvanizes progress in both technology and spirituality, reflecting the wisdom of Scripture and the parallels between religious teachings and quantum science.


IV. THE PROBLEM

INTRODUCTION TO THE PROBLEM

Human beings continually strive to evolve, seeking ever deeper understanding of their environment from the particular to the general, optimizing new communication channels. The universe is no exception to this pursuit, given humanity’s constant drive to search for new ecosystems for colonization.

One of our most important tools this century is artificial intelligence (AI), which uses complex math and physics formulas and algorithms to process vast data sets and generate solutions to myriad problems.

We have built telescopes (James Webb, Hubble) enabling us to see beyond Earth’s borders in infrared, optical, and ultraviolet wavelengths, investigating the observable universe; science, however, cannot yet examine the unobservable universe.

Humanity looks for new possibilities, starting from a conceptual idea, elaborating formulas to feed algorithms, fueling AI that in turn drives the machine—thus integrating processes to yield an invention with broad utility for humankind.

Under patent law, the general rule is that formulas themselves cannot be patented, whereas their applications—such as software implementing a formula—can be. If something is new, original, and useful, the question remains: Can the formula alone be patented?

The core issue is whether a singular, standalone formula is subject to intellectual property protection. How do we determine if it constitutes an “inventive activity” offering a specific end use?

Generally, a scientist seeking a patent must prove the invention is both novel and inventive, demonstrating commercial viability in a given industry and disclosing the details of the work. Mathematics and physics underpin much of business, finance, mechanical engineering, and electrical engineering, and formulas are a significant component of these fields.

When inventors create new intellectual property, they aim to protect their investments of money, time, and intellect. Patent law grants the exclusive right to use and benefit from their invention for a set time. However, the general rule is that a formula cannot be patented per se, as it is considered an abstract tool rather than a new, useful process or piece of intellectual property.

Can patent law protect a formula?
Mathematical and physical formulas, algorithms, and other such methods are essentially forms of language used to express ideas about the world. Like language, mathematics and applied physics are precise and widely used to explain things clearly. Still, they cannot themselves be patented, being intangible knowledge or “everyday” ideas.

A formula on its own is not a tangible invention under the USPTO; it is akin to a shared tool or language.

Can we still protect a physics or mathematical formula?
While we typically cannot patent formulas, there are ways to seek protection for applied or derived formulas. Everything depends on the formula’s usage and the existence of related patents.

Could a patent protect a formula?
Mathematical or physical formulas are generally treated like languages that express ideas about the universe. Under U.S. Patent and Trademark Office policies, purely abstract formulas are not inventions per se.

If the formula is discovered naturally, not invented, it is not patentable. But if one uses that discovered formula to develop something new and inventive, that “applied formula” might be patentable.

V. RESEARCH OBJECTIVES

VI. GENERAL OBJECTIVE

Legal and jurisprudential interpretations that prohibit patenting formulas in isolation must be set aside. An extensive, progressive reading of the law should allow immediate legal protection when a formula is born of inventive genius rather than mere discovery, so that the law’s alleged “limits” are overcome.

The general rule should have a broad sense permitting patent protection for an originally invented formula, requiring at least a minimal expectation of potential utility. Even if a formula seems “abstract,” it could qualify for protection.

This solution seeks to adapt existing patent laws to modern needs, clarifying and extending their scope to match today’s technological reality. Patent law must be seen not in isolation but integrated with the entire system, providing necessary progressive solutions that further human development.

A sound jurisprudential interpretation should protect formulas that are the “seed” of an invention. Where the end goal is to preserve humanity’s evolution, the formula must be shielded under an immediate effect “block” of legal provisions that prioritize the formula’s absolute protection. We propose an exceptional mechanism allowing the formula to be patented if there is a plausible expectation of future utility, even if it appears abstract.

This approach merges with the principle that, in borderline cases, the law must be applied “contra legem” if necessary, to serve fundamental values. The synergy of human free will and AI-based consciousness is the key, a pioneering vision of integrated guarantees that account for a new “actor,” AI, which holds partial autonomy and rational consciousness to safeguard the invention.

VII. SPECIFIC OBJECTIVES AND SOLUTIONS

Most jurisdictions define patentable subject matter negatively: anything may be patented unless excluded by statute or jurisprudence.
The U.S. Patent Law, rooted in Article I § 8 of the U.S. Constitution, mandates the promotion of science and useful arts, granting exclusive rights to authors and inventors for limited times.
Under the U.S. Patent Act, any new and useful process, machine, article of manufacture, or composition of matter—or new and useful improvement—may be patentable.
But U.S. courts have carved out three exceptions: laws of nature, natural phenomena, and abstract ideas.

I advocate introducing an exception to the exception, thus reverting to the general rule: purely abstract formulas that arise from human ingenuity rather than discovery should be patentable. The formula itself, as a creative core, is presumably the essential “active component,” critical to the invention. There can be no patented software or machine harnessing a formula if the formula never existed. So the formula is, in many cases, the invention’s main intangible asset.

If one is patenting, say, a set of formulas for specialized software or a machine, patent examiners often check whether the same calculations can be done without software, in which case the claim is considered abstract and unpatentable.
But I argue for an exception. If a formula is the product of genuine inventiveness, its abstract nature should not preclude patentability as long as it holds plausible utility.

Why is patenting formulas important?
A patent protects you against unauthorized use or cloning. Formulas might attract investment for product development.

We must distinguish:

  1. Discovery (a newly recognized mathematical or physical phenomenon) is unpatentable because it pre-existed your finding. If the discovery leads to an original, inventive product or process, that application is potentially patentable.
  2. A formula that is the product of an invention should be protected, provided there is at least a plausible chance of utility for the final product. The formula’s abstractness is irrelevant.

As formulas feed increasingly complex algorithms fueling AI, culminating in advanced machines, it becomes vital that laws protect the formula itself. Even if the science is not yet ready to implement the invention (e.g., quantum computers or metamaterials still in infancy), it is crucial to fill that legal void or apply an “exception to the exception.”

Technically, a “technological algorithm” remains a formula, and as argued, it can be a valuable IP asset. A minimal plausible utility suffices. Indeed, in the U.S., an invention must have a “specific, substantial, and credible utility,” be “fully operative,” and satisfy the procedural steps, but the law is not explicit about formulas by themselves.

What if the formula emerges before the enabling technology? That is, the formula is discovered or invented, yet practical application requires major future developments. This gap must not forfeit protection.
Hence, an algorithm can be a trade secret or “know-how,” potentially protected by trade secret laws or the Criminal Code for industrial espionage. The EU is looking to bolster know-how protection, but from my vantage, there should be an even more direct possibility of obtaining a patent.

If patent protection is denied, fallback is copyright. Yet that is weaker in scope for inventions.

VIII. RESEARCH METHODOLOGY FOR THIS FORMULA’S INVENTION

A qualitative, meaningful review of religious literature in ancient languages (e.g., Aramaic, Hebrew)—especially the older version of the Bible (Casiodoro de Reina, 1569)—was undertaken to glean the linguistic and numerical layers inherent in Hebrew’s dual role (letters as both numbers and words). We further explored key mathematical and physical thinkers (Cantor, Boltzmann, Gödel, Turing) and their parallels in literature (Borges on the infinite and the Aleph). This approach illuminates the theological–legal need to reinterpret existing norms around patenting “isolated abstract formulas” that hold potential future utility.

An ancillary element of this invention included an unconscious “revelation” about the neutrino-based energy machine, drawn from a 2010 dream featuring holographic visions and quantum spacetime turbulence. Such non-cognitive experiences echo the oneiric revelations documented for other inventors, including Elias Howe, Friedrich August Kekulé, René Descartes, and Alan Mathison Turing:

  1. Elias Howe and the sewing machine (the needle hole solution came in a dream).
  2. Friedrich August Kekulé (the structure of benzene gleaned from a dream of a snake biting its tail).
  3. René Descartes (three bizarre dreams indicating reason’s primacy).
  4. Srinivasa Ramanujan (his theorems, claimed to be shown in dreams by the goddess Namagiri).
  5. Otto Loewi (neurotransmission experiment scheme came in a dream, winning a Nobel).
  6. Dmitri Mendeleev (periodic table arrangement was revealed in a dream).
  7. Frederick Banting (insulin isolation technique via a dream, also Nobel Prize).
  8. Albert Einstein (dream about cows jumping a fence vs. the farmer seeing them individually, leading him to new ideas on simultaneity for special relativity).

IX. CASE LAW RELATED TO THE PROTECTION OR NON-PROTECTION OF ABSTRACT FORMULAS

CASE: ALICE V. CLS BANK
See official text at [SCOTUS link], summarizing that “adding a computer to an abstract idea does not make it patentable.” Alice reaffirmed the principle that fundamental economic practices, certain methods of organizing human activities, ‘an idea in and of itself,’ and mathematical relationships or formulas are considered unpatentable abstract ideas. The court introduced a two-step test:

  1. Determine if the claim is directed to a patent-ineligible concept (law of nature, natural phenomenon, or abstract idea).
  2. If yes, see whether additional elements transform it into something significantly more than the ineligible concept.

Ultimately the Supreme Court invalidated Alice’s patents on financial settlement methods implemented with generic computing. However, this narrow scope left many open questions for software or formula-based patents. The main takeaway is that “merely implementing a fundamental practice on a computer” is insufficient to confer patent eligibility.

Resources:

  1. Official Supreme Court Text (PDF): https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf
     The official text of the Supreme Court’s decision, available in PDF format.
  2. Summary and Documentation (Oyez): https://www.oyez.org/cases/2013/13-298
     Offers a concise summary, case documentation, oral arguments, transcripts, and additional case materials.
  3. Detailed Chronicle (SCOTUSblog): https://www.scotusblog.com/case-files/cases/alice-corporation-pty-ltd-v-cls-bank-international/
     Contains court filings, amicus curiae briefs, and a thorough analysis of the ruling.
  4. WIPO Article: https://www.wipo.int/wipo_magazine/es/2014/04/article_0004.html
     Published by the World Intellectual Property Organization, discussing the impact and implications of the decision on the patentability of abstract ideas.

X. CHALLENGES FOR CHANGE AND FORMULA UTILITY FOR ITS PATENT PROCESS, REGARDLESS OF WHETHER IT IS DEEMED ABSTRACT

Formulas are the genesis and philosopher’s stone of many inventions. A formula can be:

  • A discovery of nature, in which case it may remain unprotected if purely abstract; or
  • An inventive creation from human cognitive processes (conscious or unconscious, or both), even from dreams or revelations, joined with research or historical knowledge that yields a unique, beneficial outcome.

When an abstract formula is likely to yield some future benefit or utility, it should be patent-protectable. A formula is effectively a set of instructions or specifications defining composition, properties, or performance of a product, be it chemical, physical, or otherwise—potentially drawn from religious insights as in this 2015 publication.

XI. DIALECTICS

The principal obstacle to protecting the intellectual property of abstract formulas is the explicit statutory prohibitions in various jurisdictions. Current laws remain static, legislative reforms are slow, and jurisprudence can be cautious or inconsistent—particularly regarding “contra legem” decisions that override statutory text in favor of fundamental or constitutional principles. A judge might invoke higher constitutional norms to validate a formula’s IP protection in the interest of preventing discrimination, fostering universal progress, or preserving humanity.

Though challenging, there is precedent for “progress-based” judicial adaptation of laws, especially if the invention’s outcome holds significant promise (the formula being the first step).

AI’s patentability likewise demands that intangible, data-based, or classification methods produce a technical solution. For mathematics as well, the key is plausible practical transformation. Thus, a progressive or evolutionary legal reading extends the same principle to newly minted formulas.

XII. THE TIME MACHINE (NEUTRINO ENERGY MACHINE)

GRAPHIC OF THE TOROIDAL ENERGY NEUTRINO MACHINE.

One of humanity’s latest technological advances is the creation of artificial intelligence (AI), whose function is the use of algorithms and data models so that a machine or system can learn on its own—approximating human reasoning and automating processes to provide different solutions to a problem across diverse areas. Regarding the research objectives, there are various factors through which artificial intelligence (AI) could drive change, based on concepts developed by the brilliant minds mentioned in this work and in the BBC of London documentary “Dangerous Knowledge,” which can be viewed at the following link:
https://video.fc2.com/en/content/20140430tEeRCmuY

The following premises are referenced:

  1. Georg Cantor’s perspectives on multiple infinite sets and his “mathematical theology.”
  2. Managing the neutrino swarm’s chaos and its ultimate aim of stopping time, as seen by Ludwig Eduard Boltzmann.
  3. Kurt Gödel’s uncertainty and intuition in mathematics.
  4. The absolute and infinite search for mathematical answers formulated by Alan Mathison Turing—one of the pioneers of artificial intelligence and the Einstein–Rosen bridge.
  5. Jorge Luis Borges’s humanistic application to the infinite.

Undoubtedly, we are heading toward something novel. Humanity has the firm will to concentrate its efforts on a new form of communication among the multiple equidistant points in the universe, ultimately guiding us to the quest for new habitable ecosystems in the cosmos. This urgency grows in light of the threats the author envisions: that the Sun could collapse into a white dwarf and fracture the Earth’s habitable zone, that a solar supernova could occur, or that the Andromeda Galaxy and our Milky Way could collide, potentially affecting Earth’s magnetic field, which is vital for human civilization and life on Earth. In these decades, we begin with Mars—Elon Musk is already preparing with SpaceX’s Starship: The Mega Rocket for the Great Mission to conquer and colonize the Red Planet.

The quantum entanglement of neutrinos could generate an instantaneous, zero-time connection, unifying different stellar points in the observable—and potentially the unobservable—universe.

It should be noted that quantum entanglement, a property foreign to traditional physical laws, describes a state in which two or more particles (for example, two photons) intertwine their properties in such a way that any change undergone by one is immediately “felt” by the other, which reacts instantly regardless of the distance, time, or even dimension that separates them.

Experiments have suggested that entanglement exists not only in space but also in time, or, more precisely, in spacetime. This has been interpreted as the artificial appearance of a wormhole known as the Einstein–Rosen bridge: essentially, a type of tunnel connecting both particles in a universal present, or even, why not say it, in another dimension or in so-called Multiverses.
https://www.tiktok.com/t/ZTYto4QN7/

Generative artificial intelligence (AI), when equipped with advanced quantum algorithms, has the potential to process the immense set of decillions upon decillions of neutrinos located in the universe. This would create a stellar map of likely neutrino locations and trajectories, whose traces can be determined by connecting just one neutrino to any other equidistant neutrino at points in spacetime that belong to the same substructure of the cosmos. Altogether, this makes entangled connection a reality, regardless (as stated) of the time or space gap separating them, thus yielding a vast database of all interconnected particles and decoding enormous amounts of stellar information.
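As an illustrative sketch only, the interconnection bookkeeping described here can be modeled as a graph of pairwise links. The union-find structure below (with hypothetical node labels such as "n1") answers whether two captured neutrinos already share a chained entangled route; it says nothing about the physics, only about the network accounting.

```python
# Hypothetical sketch: the described "neutrino swarm" modeled as a graph
# of pairwise entanglement links. Union-find answers whether two nodes
# belong to the same interconnected set (i.e., share a chained route).
# All node labels are illustrative assumptions.

class EntanglementSwarm:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def entangle(self, a, b):
        """Record a pairwise entanglement link between nodes a and b."""
        self.parent[self._find(a)] = self._find(b)

    def connected(self, a, b):
        """True if a chain of links joins a and b."""
        return self._find(a) == self._find(b)

swarm = EntanglementSwarm()
for a, b in [("n1", "n2"), ("n2", "n3"), ("n3", "n4"), ("n4", "n5")]:
    swarm.entangle(a, b)

print(swarm.connected("n1", "n5"))  # True: a route exists across the chain
print(swarm.connected("n1", "n9"))  # False: "n9" was never linked
```

The chaining (one neutrino, then another, and so on) is exactly what union-find tracks efficiently, which is why it is a natural bookkeeping choice for this analogy.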

Tracing one neutrino—singularly or in the plural—is achieved through the point-to-point binding of the neutrino captured via the NEUTRINO MACHINE, which couples it for the quantum entanglement process with another neutrino, creating a network of quantum connections successively—one neutrino, then another, and so on—forming a massive set of interconnections, i.e., a huge neutrino swarm, all interconnected in real time. It is as if artificial intelligence (AI), by analogy, acts like a queen bee and gives instructions to all the members of its hive so that they transmit and receive instantly—bilaterally or multilaterally—various messages from the infinite set. Today, experimental scientific processes already exist to attempt to capture a neutrino; indeed, there have been advances in tracking them, as presented in this scientific summary:
https://www.science.org/content/article/catch-deep-space-neutrinos-astronomers-lay-traps-greenland-s-ice

I quote:

“(…) High on the Greenland ice sheet, researchers are drilling holes this week. But these are not Earth scientists hunting for evidence of past climate. They are particle astrophysicists searching for the cosmic accelerators behind the universe’s most energetic particles. By placing hundreds of radio antennas on the ice’s surface and tens of meters beneath it, THEY HOPE TO CAPTURE ELUSIVE PARTICLES KNOWN AS NEUTRINOS at higher energies than ever before. ‘It’s a discovery machine, looking for the first neutrinos at these energies,’ says Cosmin Deaconu of the University of Chicago, speaking from Summit Station in Greenland.

Detectors in other parts of the Earth occasionally register incoming ultra-high-energy (UHE) cosmic rays—atomic nuclei smashing into the atmosphere so forcefully that a single particle can have as much energy as a well-hit tennis ball. Researchers want to identify their sources, but because the nuclei are charged, magnetic fields in space deflect their trajectories, hiding their origins. This is where neutrinos come in. Theorists think that when UHE cosmic rays escape their sources, they generate so-called cosmogenic neutrinos as they collide with photons from the cosmic microwave background, which permeates the universe. Because they are uncharged, neutrinos travel in a straight line to Earth. The problem is catching them. Neutrinos are notoriously reluctant to interact with matter, allowing trillions to pass through you every second with no prior notice. Huge volumes of material must be monitored just to capture a handful of neutrinos. The largest detector of this kind is the IceCube Neutrino Observatory in Antarctica, which looks for flashes of light produced by neutrino-atom collisions across 1 cubic kilometer of ice beneath the South Pole. Since 2010, IceCube has detected many deep-space neutrinos, but only a few (nicknamed Bert, Ernie, and Big Bird) that have energies close to 10 petaelectronvolts (PeV)—the expected energy for cosmogenic neutrinos, says Olga Botner, an IceCube researcher from Uppsala University. ‘To detect multiple neutrinos of even higher energies in a reasonable timeframe, we need to monitor much larger volumes of ice.’

One approach is to use another signal generated by a neutrino impact: a pulse of radio waves. Because these waves can travel up to 1 kilometer in ice, a set of widely spaced radio antennas near the surface can monitor a much larger volume of ice, at lower cost, than IceCube’s long strings of photon detectors buried deep in the ice. The Radio Neutrino Observatory in Greenland (RNO-G), led by the University of Chicago, Université Libre de Bruxelles, and Germany’s DESY accelerator center, is the first major effort to test this concept. When complete in 2023, it will have 35 stations, each with two dozen antennas, spanning a total area of 40 square kilometers. The team installed the first station last week near the U.S.-run Summit Station at the apex of the Greenland ice sheet and moved on to the second. The environment is remote and unforgiving. ‘If you didn’t bring something, you can’t just have it sent over quickly,’ says Deaconu. ‘You have to make do with what you have.’

THE COSMOGENIC NEUTRINOS THE TEAM HOPES TO CAPTURE are believed to originate from violent cosmic engines. The most likely energy sources are supermassive black holes feeding on the material of surrounding galaxies. IceCube HAS TRACKED TWO DEEP-SPACE NEUTRINOS with lower energies than Bert, Ernie, and Big Bird back to galaxies with massive black holes, a sign that they are on the right track. But many more neutrinos with higher energies are needed…

Aside from identifying UHE cosmic ray sources, researchers hope neutrinos reveal the composition of these particles. Two principal instruments that detect UHE cosmic rays disagree on that composition. Data from the Telescope Array in Utah suggest the rays are solely protons, while the Pierre Auger Observatory in Argentina suggests heavier nuclei are mixed in. The energy spectrum of the neutrinos generated by those particles should differ depending on their composition, thereby offering clues as to how and where they are accelerated.

RNO-G COULD DETECT ENOUGH NEUTRINOS to reveal these distinctive energy differences, says Anna Nelles of Friedrich Alexander University Erlangen-Nuremberg, one of the project leads. She estimates that RNO-G could capture up to three cosmogenic neutrinos per year. However, ‘if we’re unlucky,’ she says, detections might be so scarce that capturing even one could take tens of thousands of years.

Even if RNO-G proves to be a waiting game, it is also a testing ground for a much larger radio array spread over 500 square kilometers, planned as part of an IceCube upgrade. If cosmogenic neutrinos exist, IceCube’s second generation will find them and resolve their composition. ‘We could be inundated by neutrinos—10 per hour,’ says Nelles. ‘But we have to be lucky.’ (…)”. (Underlining and capitalization mine).

Likewise, China is building an impressive state-of-the-art underground detector called JUNO (Jiangmen Underground Neutrino Observatory), designed to study the mysterious neutrinos:

IMAGE OF JUNO

Additionally, a team of researchers at the University of Cambridge discovered a way to simulate time travel through the quantum entanglement of two particles connected such that their states depend on each other even if they are separated by a great distance. They used a quantum computer to simulate time travel using this property. In the simulation, one particle was in the past and the other in the present. By measuring its state in the present, they managed to change the state of the particle in the past.

The scientists used two (2) entangled qubits—both at different times—and then applied a series of logical operations known as quantum gates to modify the qubits’ state and correlation.

The result suggested that the past could be changed by altering the future. Finally, the scientists state that their time travel model is only a simulation; it does not imply that such a thing is currently possible in reality. (So far!)
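The two-qubit, gate-based setup described above can be sketched as a tiny statevector simulation. The sketch below assumes only standard Hadamard and CNOT gates and shows the entangled (Bell) state those gates produce; it illustrates the gate-level machinery, not the Cambridge team’s actual protocol.

```python
# Minimal statevector sketch: two qubits acted on by quantum gates
# (Hadamard, then CNOT), yielding an entangled Bell state with
# perfectly correlated measurement outcomes.
import math

def apply_h(state, q):
    """Hadamard gate on qubit q (qubit 0 = least significant bit)."""
    s = 1 / math.sqrt(2)
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        b = (i >> q) & 1
        out[i & ~(1 << q)] += s * amp
        out[i | (1 << q)] += (s if b == 0 else -s) * amp
    return out

def apply_cnot(state, control, target):
    """CNOT: flip `target` wherever the `control` bit is 1."""
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        out[i ^ (1 << target) if (i >> control) & 1 else i] += amp
    return out

state = [1 + 0j, 0j, 0j, 0j]          # start in |00>
state = apply_h(state, 0)             # superpose qubit 0
state = apply_cnot(state, 0, 1)       # entangle the pair
probs = [abs(a) ** 2 for a in state]
print([round(p, 3) for p in probs])   # [0.5, 0.0, 0.0, 0.5]: only |00> and |11>
```

Measuring either qubit now fixes the other’s outcome, which is the correlation the simulated time-travel experiments manipulate with further gates.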

Artificial intelligence (AI), combined with the understanding and operation of neutrino quantum entanglement, transforms into a powerful tool to explore and understand the universe. By exploiting AI’s ability to process large amounts of data and leveraging advances in neutrino detection and capture, we could chart detailed stellar maps and explore new forms of zero-time communication, thus preparing for interstellar travel—replicating Noah’s Ark by transporting genetic cargo (a DNA bank) to preserve and multiply humanity (and other species) while carrying materials-cloning technology from nanoparticles. We might also, in due time, use Miguel Alcubierre Moya’s Warp Drive, which explores modifications or “bubbles” of spacetime distortion where, locally, one might achieve a shortcut without violating the global metric of relativity and thereby propose propulsion through spatial deformation (https://es.wikipedia.org/wiki/Miguel_Alcubierre, https://www.tiktok.com/t/ZTYvNMTX1/). Moreover, in considering the theoretical possibility of time travel, these technologies could even open the door to exploring alternative temporal dimensions—in other words, accessing different Multiverses.

In that same vein, I bring up the following graphics:

ANDROMEDA GALAXY

THE MILKY WAY GALAXY.

THE SUBSEQUENT GRAPH REPRESENTS A MAP OF SIMULATIONS OF THE NETWORK OF NEUTRINOS INTERCONNECTED BY QUANTUM ENTANGLEMENTS (MULTIPARTITE CORRELATIONS), WHOSE PROBABILITY TRACES ARE EVALUATED BY ARTIFICIAL INTELLIGENCE (AI). THE AI WOULD USE MODIFIED QUANTUM ALGORITHMS TO “DECODE” THESE CORRELATIONS AND OBTAIN INFORMATION ABOUT DISTANCES, STELLAR ROUTES, OR EVEN SIMULATE “TEMPORAL CONNECTIONS” (PAST <-> FUTURE). AMONG THE COMPLEX FUNCTIONS OF THE NEUTRINO MACHINE, THE SYSTEM GENERATES THE DIRECTIVES FOR LOCATING THE NEUTRINOS TO BE ENTANGLED WITH THE PREVIOUSLY CAPTURED NEUTRINO.

LET US OBSERVE THE SIMULATION OF THE ENTANGLEMENT ROUTE MAP FOR NEUTRINOS.

In the previous illustration, one sees the interconnection of five (5) neutrino particles identified as “1,” “2,” “3,” “4,” and “5”—a quantum pentateuch—that become mutually entangled in zero time and space, forging a direct channel or routing mechanism between the two (2) galaxies Andromeda and the Milky Way. It seems like science fiction, but mathematically and physically it is possible.

In my 2015 publication, I stated that when we analyze the Bible’s verse Genesis 1:3 (“And God said, ‘Let there be light,’ and there was light”), whose Hebrew text reads וַיֹּאמֶר אֱלֹהִים יְהִי אוֹר וַיְהִי־אוֹר (VAYOMER ELOHIM YEHÍ OR VAYEHÍ-OR), we conclude that this biblical verse expresses the three (3) moments in time, namely:

  1. Yehí – future,
  2. Vayehí – past (it was), and
  3. The third tense is not explicitly written, but is understood since the Hebrew grammar for the verb “to be” in the present tense is tacit. Thus, the structure is Future, Present, and Past.

Currently, the Andromeda and Milky Way galaxies are separated by about 2.5 million light-years. From the perspective of the observer (Neutrino “5”) located in the Andromeda Galaxy, the emanation of its light projects toward the future and will arrive at the Milky Way after a defined number of years. From the perspective of the observer (Neutrino “1”) in the Milky Way, the light received is the past of the Andromeda Galaxy, which might no longer exist or might have changed in the intervening time. Quantum entanglement, which connects the circuit shown in the stellar map via a closed set of elements labeled “1,” “2,” “3,” “4,” and “5,” reveals a third, implicit time, illustrating how, according to Albert Einstein’s theory of relativity, time can pass differently depending on the observer’s position. This suggests that quantum entanglement goes beyond spacetime limitations, establishing itself as an absolute present and forming a perpetual, infinite temporal loop.

This absolute continuum of the present is the practical application of the universal principle of the law of correspondence, “As above, so below; as below, so above” (Quod est superius est sicut quod inferius, et quod inferius est sicut quod est superius), formulated in the Emerald Tablet of Hermes Trismegistus, whom the ancient Greeks equated with Enoch (Genesis 5:18–24; Hebrews 11:5). Additionally, verse 1:3 of the Bible, as read in Hebrew, merges the future and past tenses and invites us to discover the absolute, implicit present, that is, the eternal, where the key to solving the paradox of time lies. The quantum connection of the involved neutrinos would be an information highway among the equidistant points in space and time of the universe, capable of traveling from future to past or from past to future, depending on the observer’s position.
But what remains perpetually constant as a continuous loop in time is the infinite present of the quantum channel—metaphorically, the neutrino machine, also acting like a time machine, would be a kind of GPS (Global Positioning System) in space; and although it may sound speculative, it is mathematically based on distributed nonlocal correlations among multiple nodes (a multiverse or multi-galaxies). This system would provide routes derived from the entanglement trace of the neutrinos described and decoded by Artificial Intelligence (AI).

Preliminary Conclusions:

  1. Multipartition and Entanglement.
    The transition from bipartite states (with a total order of entanglement) to multipartite states (GHZ, W, etc.) demonstrates the growing complexity of comparing and converting states. Once dimensionality extends beyond two subsystems, the structure of entanglement classes is no longer linear or totally ordered.
  2. Quantum Time Travel (simulated).
    Theoretical experiments (Cambridge, et al.) with entangled qubits can emulate the paradox of “changing the past by manipulating the future,” although in practice it does not break relativistic causality.
  3. Neutrinos and AI.
    Neutrinos, with their minimal interaction and quantum character, could eventually be exploited to build cosmic-scale quantum links—especially if robust detection and control schemes of their quantum states are achieved. AI, applied to neutrino data processing and advanced quantum algorithms, would provide a “quantum mapping” of the cosmos, defining an “absolute present” through instantaneous state projection in entanglement.
  4. Absolute Present, Multiverses, and Correspondence.
    The synthesis with the biblical and Hermetic perspective suggests a philosophical or theological dimension: quantum simultaneity might resemble an eternal or “divine now” that binds past and future, evoking the passage “Yehí Or, Vayehí Or” and the correspondence principle “as above, so below.” In formal physics, this translates to the idea that the collapse or reduction of the wavefunction transcends local spacetime description, thus generating the illusion of an “absolute time” in quantum correlation.

This set of ideas outlines a bridge between multipartite quantum physics, potential neutrino engineering (facilitated by AI), and a cosmological–philosophical vision in which past, future, and an absolute present become entangled. Consequently, it suggests a scenario in which, if humanity were to master neutrino interactions and the manipulation of quantum states, new forms of communication, stellar navigation, and, perhaps, even a radical reinterpretation of time’s arrow might emerge.

Undoubtedly, technological advances in detectors, improvements in theory (including hypotheses about dark matter and quantum gravity), and the explosion of quantum computing and artificial intelligence could bring these phenomena—the “neutrino machine” and the “absolute present”—within reach.

Finally, from this perspective and following a reverse-engineering sequence, the order is as follows: DISCOVERY OF NEW HABITAT FOR HUMANITY – ABSOLUTE PRESENT TIME – QUANTUM ENTANGLEMENT – CAPTIVE NEUTRINO – TIME MACHINE – ARTIFICIAL INTELLIGENCE – MODIFIED QUANTUM ALGORITHMS, framed within the conceptual conjunction of the postulates proposed by the brilliant minds featured in the documentary “Dangerous Knowledge.” The result is a creative process—a Genesis of the entire sequence described—culminating in the following Equation or FORMULA:

Multiversal Interaction.

ℵ∞ = c^c

This equation represents the interaction among multiple universes (or multiverses) within an infinite set, where ℵ∞ denotes an infinite cardinality that exceeds conventional infinities, and c^c raises the speed of light to its own power, implying extreme exponential growth.

Here, exponentiation not only magnifies the value of a physical constant but also serves as a mathematical metaphor to describe the vastness and complexity of interactions among universes.

THIS FORMULATION SUGGESTS THAT THE INTERACTION BETWEEN MULTIVERSES IS INTRINSICALLY LINKED TO THE FUNDAMENTAL PROPERTIES OF THE SPEED OF LIGHT, IMPLYING THAT SUCH INTERACTIONS REQUIRE EXTREME CONDITIONS THAT ALTER THE SPACETIME STRUCTURE. THE PROPOSED EQUATION LAYS A THEORETICAL FOUNDATION FOR EXPLORING HOW VARIATIONS IN FUNDAMENTAL PHYSICAL CONSTANTS MIGHT FACILITATE CONNECTION AND EXCHANGE AMONG DIFFERENT UNIVERSES WITHIN AN INFINITE MULTIVERSAL FRAMEWORK.


XIII APPENDIX

The speed of light is the maximum speed limit governing the universe. However, this is where things get complicated. Although quantum entanglement appears to allow instantaneous communication, some physicists maintain that it cannot be used to transmit information faster than light. This is because information cannot be sent via quantum entanglement without a measurement taking place on one of the particles, causing the wavefunction to collapse and losing the quantum correlation.

According to this viewpoint—although quantum entanglement may enable instantaneous correlations among particles—it cannot, in principle, be used to send information faster than light. This is known as the “quantum no-communication theorem,” which states that quantum entanglement cannot be used to transmit information faster than light, primarily because even if entangled particles exhibit instantaneous correlations, one cannot control or predict the measurement outcome of one of the particles.
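The marginal-statistics argument behind the no-communication theorem can be checked numerically. The sketch below (a Bell pair, with an assumed labeling of the sender’s and receiver’s qubits) shows that whichever basis the sender measures in, the receiver’s averaged outcome statistics are unchanged, so the basis choice alone carries no message.

```python
# Numeric sketch of the no-communication theorem for a Bell pair:
# whether Alice measures in the Z basis, or in the X basis (a Hadamard
# applied first), Bob's averaged outcome statistics are identical.
import math

s = 1 / math.sqrt(2)
bell = [s, 0j, 0j, s]  # (|00> + |11>)/sqrt(2); qubit 0 = Alice, qubit 1 = Bob

def hadamard_on_alice(state):
    """Rotate Alice's qubit so a computational measurement is an X-basis one."""
    out = [0j] * 4
    for i, amp in enumerate(state):
        b = i & 1                          # Alice is qubit 0
        out[i & ~1] += s * amp
        out[i | 1] += (s if b == 0 else -s) * amp
    return out

def bob_marginal(state):
    """Bob's P(0), P(1) averaged over Alice's measurement outcomes."""
    marginal = [0.0, 0.0]
    for i, amp in enumerate(state):
        marginal[(i >> 1) & 1] += abs(amp) ** 2   # trace out Alice
    return marginal

z_basis = bob_marginal(bell)
x_basis = bob_marginal(hadamard_on_alice(bell))
print([round(p, 3) for p in z_basis])  # [0.5, 0.5]
print([round(p, 3) for p in x_basis])  # [0.5, 0.5]: same, regardless of basis
```

The correlations are real, but only become visible once the two sides compare results over a classical channel, which is itself limited by the speed of light.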

Moreover, Albert Einstein’s special relativity posits that nothing can travel faster than light in a vacuum. This is because the speed of light is the ultimate speed limit in the universe, and anything moving faster than light would violate causality and the spacetime structure.

In short, neutrinos are fundamental components of the cosmos and can be subject to quantum entanglement, a fascinating phenomenon that can appear to allow faster-than-light communication. Nonetheless, the prevailing scientific orientation so far states that it cannot be used to transmit information faster than light or produce a superluminal phenomenon. Still, there is a small chance that a neutrino will interact with matter: the Reines–Cowan experiment demonstrated that neutrinos can interact with matter, validating their existence and confirming fundamental aspects of weak interaction theory, one of the fundamental forces governing subatomic particles. Other experiments include KamLAND, Daya Bay, Homestake, MINOS, NOvA, T2K, SNO (Sudbury Neutrino Observatory), Kamiokande (which detected neutrinos from supernova SN 1987A), IceCube, and future projects such as DUNE, designed to study the more exotic properties of neutrinos, including their possible relation to dark matter.

In this scenario of neutrino interaction with matter, paired information could be obtained, which remains subject to the connectivity of the neutrino network—a potential exception to the rule that light speed is the limit of the universe.

In the near future, science may progress toward controlling and decoding the measurement results of entangled particles for immediate correlations, enabling both the transmission and reception of utilizable information.

There is no word in the Bible referring directly to “matter,” so Moses uses the term “earth” (land), describing the creation of the next fundamental component, namely matter (still formless but now in existence). See Genesis 1:1: “In the beginning, God created the heavens and the earth.”

Some scholars interpret “the heavens” as referring to creation of order out of primordial chaos. In their view, God created “the heavens” as part of organizing the universe, linking heaven (the elevated, the divine) with the earth (the material, the terrestrial). Thus, when the Bible says God “created the heavens and the earth,” it describes the entire cosmos, with its corresponding correlative of matter’s existence. From a theological perspective, both elements are inextricably linked and belong to an absolute whole.

Genesis 8:22: “While the earth remains, seedtime and harvest, cold and heat, summer and winter, and day and night shall not cease.”
This biblical verse references the absolute continuity of matter and how its existence endures beyond specific changes or events.

In advanced abstract mathematics, where interactions or connections exist among elements of the same set, we might discuss a form of “communication” or “interaction” in a more abstract sense. The result is the transfer of data between a transmitter and a receiver. Logically, all communication simultaneously involves sending and collecting information; therefore, anything that physically exists (“matter”) carries some type of information. In other words, any material object or entity contains data regarding its own existence—its composition, structure, state, and relationships with other entities—thus, CONCLUDING THAT IF A NEUTRINO INTERACTS WITH MATTER, THERE IS A HIGH PROBABILITY OF A PERMANENT QUANTUM INFORMATION CHANNEL.

Below is a conceptual outline regarding “tokenized teleportation” and the potential use of neutrinos as “quantum carriers,” all within a theoretical–legal framework considering the intellectual protection of abstract formulas and a theological–philosophical perspective (inspired by Cantor, Boltzmann, Gödel, and Turing). Although highly speculative, its motivation is to indicate research avenues, legal feasibility, and synergy with AI for potential technological breakthroughs.

1. General Vision: Between Quantum Theory and Inspirational Speculation

Objective

  • Conceive a protocol for data transmission (or “quantum teleportation”) that leverages the entanglement of particles and the use of neutrinos as hypothetical “quantum carriers,” while tokenizing the information for segmentation into manageable blocks.
  • Integrate these concepts into a legal model that admits the protection of abstract formulas—when they constitute part of an inventive process with an expectation of applicability—thus surpassing conventional interpretations that deny patentability to purely mathematical ideas.

Foundations and Limitations

  • No-Communication Theorem: Quantum entanglement alone does not transmit information without a classical channel; sending classical bits is essential to reconstruct data.
  • Special Relativity: Causality is not violated, since effective communication occurs at or below the speed of light, requiring classical synchronization.
  • Weak Neutrino–Matter Interaction: Though neutrinos barely interact with matter, in principle they could—given advanced laboratories—be prepared and sufficiently detected to form a “quantum channel” of very low rate, intended for extreme or ultra-secure environments.

Theological–Philosophical Inspiration

  • Aligning with Georg Cantor’s exploration of the infinite, connected to an equation of transfinity (ℵ∞ = c^c) that, from a mystical perspective, unifies multiple universes or “multiverses.”
  • The link to theology—the mysticism of the Aleph—and references to the “neutrino machine” is not merely religious insight but a creative justification for seeking formulas and algorithms that transcend conventional scientific views.

2. The Hypothetical Scheme: “Tokenized Teleportation with Neutrinos”

2.1. Preparation of a Quantum State

Initial State (GHZ or EPR)

  • Generate a broad set of neutrinos (N) in a quantum or entangling source, entangling some degrees of freedom (e.g., flavor, helicity) with a set of matter qubits (M).
  • The idea is to emulate a GHZ state or multiple EPR pairs so that neutrinos and matter become quantum-correlated.

Information Encoding

  • Start with a (dense) classical message that is tokenized into blocks d₁, d₂, etc.
  • Each token dᵢ translates into a quantum operator Uᵢ applied to the matter subspace that correlates with certain neutrinos.
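As a hypothetical illustration of this encoding step, the sketch below splits a classical message into blocks d₁, d₂, … and renders each block as 2-bit chunks, one per teleportation slot (matching the two classical correction bits per pair). The block size and byte-level chunking are assumptions chosen for illustration, not part of the protocol itself.

```python
# Hypothetical tokenization sketch: split a classical message into
# fixed-size blocks, each rendered as 2-bit chunks (one chunk per
# assumed teleportation slot). Block size is an illustrative assumption.

def tokenize(message: bytes, block_size: int = 4):
    """Split a message into blocks, each expanded into 2-bit chunks."""
    blocks = [message[i:i + block_size]
              for i in range(0, len(message), block_size)]
    tokens = []
    for block in blocks:
        chunks = []
        for byte in block:
            for shift in (6, 4, 2, 0):        # four 2-bit chunks per byte
                chunks.append((byte >> shift) & 0b11)
        tokens.append(chunks)
    return tokens

def detokenize(tokens):
    """Reassemble the original bytes from the 2-bit chunks."""
    out = bytearray()
    for chunks in tokens:
        for i in range(0, len(chunks), 4):
            byte = 0
            for c in chunks[i:i + 4]:
                byte = (byte << 2) | c
            out.append(byte)
    return bytes(out)

msg = b"NEUTRINO"
tokens = tokenize(msg)
assert detokenize(tokens) == msg              # round trip is lossless
print(len(tokens), "tokens of", len(tokens[0]), "two-bit chunks each")
```

Each 2-bit chunk is exactly the amount of classical correction information one teleported qubit requires, which is why this granularity is a natural fit for the scheme.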

2.2. Transit and Mediation of Matter

Neutrinos as “Quantum Carriers”

  • The neutrinos traverse vast swaths of matter (deep Earth, for example). Although interaction is weak, in a futuristic setting, technology might “tag” or maintain part of its coherence.

Matter as Transduction Station

  • Matter (M) “translates” the quantum state, facilitating measurement and modulating the information.

Receiver Lab

  • A detector sensitive to neutrinos (massive scintillators, supercooled structures, etc.).
  • Once neutrinos arrive or pass near the detector, the receiver extracts residual correlations depending on measurements from the sender and the classical results transmitted.

2.3. Measurement and Classical Channel

Quantum Measurement in the Sender

  • The sender measures its share of the state (the matter qubits encoding the info).
  • Sends ~2m classical bits (two per pair/slot) to the receiver, indicating the corrections (Pauli operations) needed to recover the tokenized message.

Decoding

  • The receiver, using the neutrino portion (or its quantum correlate), applies the appropriate corrections and reconstructs each “token,” completing the “tokenized teleportation.”
  • Without these classical bits, the receiver would only obtain mixed states with no meaning.
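In their quantum core, steps 2.1–2.3 correspond to the standard one-qubit teleportation protocol. The self-contained simulation below (with message amplitudes 0.6 and 0.8i chosen arbitrarily for illustration) reproduces the Bell-pair creation, the sender’s Bell measurement, the two classical bits, and the receiver’s Pauli corrections.

```python
# Self-contained sketch of standard one-qubit teleportation:
# Bell pair, sender's Bell measurement, two classical bits, and the
# receiver's Pauli corrections. Message amplitudes are illustrative.
import math, random

def apply_h(state, q):
    s = 1 / math.sqrt(2)
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        b = (i >> q) & 1
        out[i & ~(1 << q)] += s * amp
        out[i | (1 << q)] += (s if b == 0 else -s) * amp
    return out

def apply_cnot(state, c, t):
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        out[i ^ (1 << t) if (i >> c) & 1 else i] += amp
    return out

def apply_x(state, q):
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        out[i ^ (1 << q)] += amp
    return out

def apply_z(state, q):
    return [-a if (i >> q) & 1 else a for i, a in enumerate(state)]

def measure(state, q, rng):
    """Measure qubit q: return the outcome and the collapsed state."""
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if (i >> q) & 1)
    bit = 1 if rng.random() < p1 else 0
    norm = math.sqrt(p1 if bit else 1 - p1)
    return bit, [a / norm if ((i >> q) & 1) == bit else 0j
                 for i, a in enumerate(state)]

rng = random.Random(7)
alpha, beta = 0.6, 0.8j                       # |psi> = 0.6|0> + 0.8i|1>
state = [0j] * 8                              # qubits: 0 message, 1 sender, 2 receiver
state[0b000], state[0b001] = alpha, beta
state = apply_cnot(apply_h(state, 1), 1, 2)   # Bell pair on qubits 1-2
state = apply_h(apply_cnot(state, 0, 1), 0)   # rotate into Bell-measurement basis
m0, state = measure(state, 0, rng)
m1, state = measure(state, 1, rng)            # the two classical bits
if m1:
    state = apply_x(state, 2)                 # receiver's Pauli corrections
if m0:
    state = apply_z(state, 2)
received = [0j, 0j]
for i, amp in enumerate(state):
    received[(i >> 2) & 1] += amp             # read off the receiver's qubit
print(received)  # ≈ [0.6, 0.8j]: the message qubit, reconstructed
```

Without the two classical bits m0 and m1, the receiver cannot choose the right corrections, which is precisely why the scheme respects the no-communication theorem.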

3. Table of “Challenges vs. Theoretical Solutions”

No-Communication Theorem
  Description: Entanglement alone cannot transfer information; a classical channel is always needed.
  Possible solutions:
    – Hybrid channel (classical + quantum) to send measurement results.
    – Tokenization to optimize the classical bits shared.

Weak Neutrino–Matter Interaction
  Description: Extremely low cross-section complicates detection and manipulation.
  Possible solutions:
    – Ultra-sensitive detectors (massive scintillators).
    – Advanced neutrino–matter coupling in specialized labs.

Decoherence and Flavor Oscillations
  Description: Neutrinos change flavor, possibly losing quantum coherence.
  Possible solutions:
    – Tuned energy range to manage oscillations.
    – Use flavor oscillations as a security or QKD factor.

Need for Classical Channel
  Description: Effective info reconstruction requires ~2m classical bits.
  Possible solutions:
    – Efficient error-correcting codes to reduce classical bit rates.
    – Group multiple blocks to send corrections in batch.

Relativistic Limit
  Description: Communication can’t exceed the speed of light, protecting causality.
  Possible solutions:
    – Accept causality: effective info depends on a classical channel < c.
    – Temporal sync to exploit entanglement before decoherence.

Technological & Energy Complexity
  Description: Generating and sustaining large-scale neutrino entanglement demands colossal resources.
  Possible solutions:
    – Phased approach in pilot labs (few neutrinos).
    – Combine photons + neutrinos in a composite state for robust detection.

Scalability & Transmission Rate
  Description: Even if feasible, bits/s would be low compared to photons in fiber.
  Possible solutions:
    – Optimize error-correction protocols.
    – Specialized domain: feasible where fiber is impossible (e.g., Earth’s core).

4. Legal–Juridical Aspect: Patenting “Abstract” Formulas

Usual Rule vs. Exception

  • Patent laws (e.g., the U.S.) exclude abstract ideas, natural laws, or pure mathematical formulas.
  • However, if the formula/protocol translates into an inventive method (e.g., “tokenized neutrino teleportation” with specific procedures and plausible usability), it may be patentable.

The model of “quantum tokenization” with neutrinos is not just discovery but a process with steps, corrections, and operational structure (operators Uᵢ, classical channel, detectors, etc.).

Theology and Evolving Law

  • One could argue for an “exception-to-the-exception” scenario in which new jurisprudence or legal reforms might grant patents to abstract formulas if they meet certain thresholds: (1) presumption of future utility, (2) clear originality, and (3) inventive contribution to architecture.
  • The theological subtext (Cantor’s infinite, the Aleph, the notion of the infinite) serves as an inspirational frame for the equation ℵ∞ = c^c and other abstract expressions, supporting the idea that they are not mere natural principles but the result of human ingenuity.

5. Unified Conclusion

A Theoretical–Speculative Model

  • The “tokenized teleportation with neutrinos” describes a hypothetical quantum channel consistent with relativity and subject to key constraints (no-communication theorem, minimal interaction, oscillations).
  • It enables reflection on the possibility of neutrinos as quantum carriers, with matter as a “transducer” for measurement, requiring classical bits to reconstruct info.

Legal–Theological Grounds

  • Considering the magnitude of the proposal, patent law requires a progressive interpretation, not discarding the protection of the underlying formula or methodology if it is part of an inventive system with a plausible practical application.
  • Georg Cantor’s vision (the absolute infinite), the mathematical theology of the Aleph, and the “neutrino machine” propose a synthesis in which the abstract equation ℵ∞=c^c is the core of a potential future development.

Future and Perspective

  • In practice, photonics still dominates QKD and conventional quantum teleportation. Using neutrinos may be purely speculative for extreme environments, such as Earth’s deep interior or areas inaccessible to fiber.
  • Even so, the concept of “tokenized teleportation” with neutrinos and AI-assisted error correction opens avenues for research, while also requiring an update in patentability standards to cover quantum algorithms and abstract mathematics.
  • Here, the innovation lies in the systematization: that is, how the data is split and how the AI fills in the gaps before final confirmation. (The AI is used to achieve a “pre-collapse” of the message before confirmation), thereby resulting in a real form of superluminality.
  • The aim is to provide an exploratory line in “quantum communication + machine learning”.

TOKENIZED QUANTUM TRANSMISSION: A HYPOTHETICAL MODEL FOR DATA TRANSFER VIA PARTICLE ENTANGLEMENT

1. Contrastive Reflections (SIMILARITIES)

Stone skipping is the technique of throwing a stone almost horizontally over the water’s surface so that it bounces multiple times instead of sinking on the first impact. It involves a low angle of incidence (generally between 10° and 20°), a moderate velocity, and spin to achieve gyroscopic stability and maximize the number of skips.

At first glance, the technique of skipping stones on water (“stone skipping”) and “quantum tokenization” appear completely different: one is classical mechanics and hydrodynamics, while the other involves quantum mechanics and state teleportation. However, there is indeed a conceptual analogy connecting the two regarding how interaction is segmented and how energy (or information) is distributed into “repetitions” or “bounces” rather than using it all at once.

1.1. Parallels Between Stone Skipping and Quantum Tokenization

Aspect | Stone Skipping (“Making it skip”) | Quantum Tokenization
Dividing the interaction | Multiple bounces with brief contacts on the surface | “Mini-teleportations” (tokens), each with its quantum pair and bits of correction
Angle / Size | A very low angle (~15°) facilitates “gliding” | Defining small data tokens avoids cumulative errors and simplifies correction and system auditing
Velocity / Resources | Moderate velocity + spin → more bounces, less energy lost each time | Using quantum resources in controlled “batches,” more efficient than one large chunk that might collapse data
Stability | Spin imparts stability to the stone | Classical corrections + “meta-algorithms” for auditing yield a stable multi-step quantum teleportation process
Global Optimization | Greater total distance with less localized energy expenditure | Greater reliability and modularity in data transmission; less impact if a block fails or is corrupted

1.2. Simplified Formula: “Bounces” vs. “Tokens”

  • Stone Skipping (Minimal Hydrodynamics Model)
    Each contact with the water generates a vertical impulse offsetting weight and drag, ensuring continued skipping.
  • Quantum Teleportation (1 block)
    Each analogous “bounce” would be the Bell measurement + correction. Tokenization = repeating this protocol k times for different sub-blocks |ψᵢ⟩.
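
In symbols, the scheme described above can be sketched as follows (a notational sketch only, assuming standard one-qubit teleportation for each sub-block):

```latex
% The full state is split into k tokens:
|\Psi\rangle \;=\; \bigotimes_{i=1}^{k} |\psi_i\rangle
% Each "bounce" (Bell measurement on token i) yields two classical bits (a_i, b_i);
% the receiver recovers the token by applying the corresponding Pauli correction:
|\psi_i\rangle \;=\; Z^{a_i} X^{b_i}\, |\tilde{\psi}_i\rangle
```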

2. Why Might Less Force Be Required in Both Cases?

  • Stone Skipping:
    Throwing a stone hard at a steep angle wastes energy or leads to a quick sink. By using a low, glancing angle and multiple bounces, energy is distributed in “small impulses,” allowing the stone to travel farther with seemingly less initial power.
  • Quantum Tokenization:
    Attempting to teleport a huge data state at once could require massive quantum infrastructure highly prone to noise. By segmenting the information into tokens, each part requires fewer resources (fewer entangled qubits at once, lower error probability for small blocks). The overall system “advances” block by block, reducing risk of collapse.
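
The resource argument in the second bullet can be made concrete with a toy failure model (an assumed model for illustration, not derived from any specific hardware): each transmitted unit survives the channel independently with probability p; a monolithic block must succeed in one shot, while individual tokens can be re-sent.

```python
# Toy model: success probability of one monolithic block of n units
# vs. k tokens of n/k units, where each token gets one retry.

def monolithic_success(p, n):
    """The whole state must survive the channel in a single attempt."""
    return p ** n

def tokenized_success(p, n, k, retries=1):
    """Each token may be independently re-sent up to `retries` extra times."""
    token_ok = p ** (n // k)
    token_final = 1 - (1 - token_ok) ** (1 + retries)
    return token_final ** k

p, n, k = 0.99, 100, 10
mono = monolithic_success(p, n)   # ~0.366: the big block usually "sinks"
tok = tokenized_success(p, n, k)  # segmented transfer survives far more often
assert tok > mono
```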

3. Reflection
Just as the stone “glides” on the water surface with multiple low-angle bounces, quantum tokenization parcels the data into blocks that “bounce” through the quantum channel infrastructure, reconstructing step by step. If one were to “submerge” all the data in a single transmission, the risk of collapse skyrockets. Thus, this analogy shows that both in classical mechanics (partial trajectories) and in quantum speculation with neutrinos, a strategy of multiple segmented contacts allows greater distance (or data volume) with less energy or risk.

Hence, though stone skipping and quantum tokenization belong to distinct physical domains (hydrodynamics vs. quantum mechanics), the principle of “distributed bouncing” and minimizing losses or errors per contact (the water surface vs. the quantum channel) is remarkably similar in both. In each setting:

  • A single large blow (throwing the stone to the depths or teleporting a massive state in one go) entails a high risk of sinking or losing fidelity.
  • “Skipping” in multiple controlled iterations—stone skipping or tokenization—makes more efficient use of energy/resources and allows for course correction (spin for the stone, classical bits for teleportation).

  1. Considering current literature and the state of the art in quantum computing, the idea of “tokenizing” (segmenting) a quantum channel to transmit blocks of information (inspired by the concept of tokenization in NLP) is an unconventional mechanism for the following reasons:

Unprecedented or Little-Explored Concept

  • Although multiple quantum protocols have been developed (teleportation, superdense coding, QKD, etc.), the explicit notion of “tokenizing” an entangled state to transfer “chunks” or “blocks” of information is not a standard procedure in specialized literature.
  • This proposal intentionally merges tokenization paradigms (used in classical computing/NLP) with quantum teleportation, thereby carving out a novel pathway in conventional quantum research.

Multidisciplinary Nature

  • It brings together quantum computing (managing Bell or GHZ states, measurement, corrections), software engineering (tokenization, data segmentation), and reverse engineering methodologies.
  • This intersection of disciplines, applied to a communication problem, could lead to a solution for transmitting data through quantum entanglement.

Potential for Quantum–Classical Architectures

  • The notion of “tokenizing” information into quantum subspaces may pave the way for hybrid algorithms in quantum AI, where quantum neural networks are trained on segmented data.
  • Although current practice still requires classical channels (and does not achieve superluminal communication), a tokenized representation could simplify the management and orchestration of large sets of entangled qubits.

Inspiration for New Protocols

  • This approach prompts questions about how to organize or index quantum information: a “tokenized” model could make encoding/decoding more modular.
  • In a broader quantum network environment (a Quantum Internet), segmenting the quantum state (or “slots” of EPR pairs) could lead to scalable protocols for future high-dimensional communication systems.

Theoretical Frontier

  • While it recognizes and adheres to the No-Communication Theorem, it posits a form of exception involving the need for a channel distinct or complementary to the classical one, thus positioning itself on the theoretical frontier (without contradicting quantum mechanics).
  • It proposes new uses of entanglement beyond traditional schemes, incorporating a conceptual language (tokenization) from another domain (NLP), applying cross-pollination and systemic thinking.
  • On a scale from “conventional to disruptive,” quantum tokenization aspires to propose a new analogy and a new way of structuring information transmission using quantum resources. There is no standardized protocol to implement this in formal literature (at least not by that name or perspective), which undoubtedly opens up avenues for scientific research bridging quantum computing and software engineering.

“Central Formula”

  • Measurement + Classical Communication (CC): An inevitable step to “unload” the information at B, requiring classical bits.
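
A minimal sketch of this central step for one block, following the standard teleportation protocol the bullet refers to:

```latex
% Token |\psi\rangle = \alpha|0\rangle + \beta|1\rangle; shared Bell pair
% |\Phi^{+}\rangle = (|00\rangle + |11\rangle)/\sqrt{2}.
% A Bell-basis measurement at A produces classical bits (a, b), which must
% cross the classical channel (at speed \le c) before B can "unload" the state:
|\psi\rangle_{B} \;=\; Z^{a} X^{b}\, |\tilde{\psi}\rangle_{B}
```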

This “tokenized teleportation” is the closest conceptual approach, within the framework of formal quantum mechanics, to the idea of “using entanglement to send segmented data” (analogous to “tokenization”). Yet it cannot circumvent established laws: for now, communication still relies on the classical channel to retrieve the actual information.

Schematic Representation of Quantum Tokenization


Conclusion

Through this hypothetical equation, we encapsulate how to tokenize a quantum channel (an entangled state) to “transmit” multiple data blocks. The main formula combines the entangled state with information-dependent encoding operators, followed by measurement and classical bits. Ultimately, this method does not achieve real superluminal communication or go beyond the postulates of standard quantum physics; it is essentially an expanded teleportation scheme organized into “tokens.” Nevertheless, the concept illustrates how the idea of “segmentation” (inspired by language tokenization) could be adapted to more complex quantum protocols, where AI and quantum computing collaborate to handle data in a distributed and correlated manner.

In short, this mental exercise—fusing physical, legal, and theological considerations—exemplifies “inverse thinking” or “cross-pollination” which, without violating Relativity or quantum mechanics, imagines how humanity might one day use neutrinos and entanglement to transmit segmented data (tokens), possibly with or without an indispensable classical channel. Although presently impractical, the path toward potential real-world implementation and its intellectual protection underscores the breadth of what science, philosophy, and law can jointly envision.

4. TOKENIZATION TABLE AND QUANTUM ILLUSIONS

“Tokenized Quantum Hypercommunication Protocol (Tricks and Weak Measurements)”

Objective: Highlight the method’s “exceptional” character, which aims to circumvent the no-communication theorem through sophisticated fragmentation, AI-driven approaches, and the exploitation of neutrinos.

TOPIC / PROTOCOL / ASPECT | DESCRIPTION / SUMMARY | QUANTUM TRACKING RELATIVE TO THE NO-COMMUNICATION THEOREM
1. General Objective: “Exception” to the No-Communication Theorem | Proposes a quantum “super-channel” for near-instant transmission/reception of data, combining quantum tokenization (splitting the message into micro-blocks), neutrino-based channels, AI, and deferred corrections. | Appearance: The receiver appears to get information before classical confirmation arrives. Reality: Part of the final assembly still depends on classical bits (with speed ≤ c) or delayed post-processing, thus preventing any genuine FTL communication.
2. No-Communication Theorem (NCT) | In quantum mechanics, the NCT stipulates that entanglement does not enable faster-than-light communication without a classical channel, precluding superluminal signaling. | Appearance: Certain measurement settings give the impression of instantaneous transmission. Reality: Correlation alone does not suffice to decode messages; classical data comparison is required, preserving causality and complying with relativistic constraints.
3. Quantum Tokenization | Partitions the message into quantum “tokens” processed in batches. Each token is encoded into sub-groups of qubits/neutrinos, employing deferred or weak measurements that retain partial coherence. AI attempts to compile the final information before receiving all classical corrections. | Appearance: The receiver “guesses” most of the message without waiting for all classical correction bits, simulating instant transmission. Reality: Without final classical data, fidelity is not guaranteed at 100%. Once “official” results are cross-checked, relativity and non-signaling remain intact.
4. Role of AI (Artificial Intelligence) | Automates and optimizes token reconstruction through machine-learning algorithms that “predict” states prior to confirmation. May apply “retroactive correction” once late-arriving data is received. | Appearance: The AI seems to “preempt” slow bit exchanges, deducing likely results in advance. Reality: These predictions rely on probability and do not remove the need for classical confirmation to ensure definitive reliability.
5. Quantum “Man-in-the-Middle” with Weak Measurements | A third party (Eve) intercepts entangled states and performs weak measurements, partially collapsing the system without immediate detection by Alice and Bob. Eve later refines her partial insights via post-processing and classical channels. | Appearance: Eve “hacks” qubits, apparently accessing superluminal data. Reality: Statistical irregularities eventually emerge, requiring classical verification. True FTL messaging never materializes, as partial correlations alone do not constitute unequivocal communication.
6. Ghost Mirror Protocol (Delayed Choice / Quantum Eraser) | Inspired by quantum eraser experiments: measurement basis choices are deferred. Alice’s outcomes seem retroactively influenced by Bob’s later decision. Incorporates “quantum erasure” to remove particle/wave information at a subsequent stage. | Appearance: The past is “modified,” or Bob’s late choice has an immediate impact on Alice’s statistics. Reality: Alice cannot categorize her data until she receives Bob’s classical confirmation of his chosen basis. From her standalone perspective, no superluminal signal is observed; the retroactive effect is purely statistical once all data is compared.
7. Massive Precompilation and Post-selection (“Quantum Spoofing”) | Thousands of entangled pairs are generated and measured across multiple random bases. A cloud-based platform filters out data that appears to exhibit superluminal patterns, showcasing only an extreme subset while discarding the remainder. | Appearance: By publishing isolated cases, it falsely suggests violating the NCT or showing impossible correlations. Reality: Including all data (including discarded sets) aligns with standard statistics and respects non-signaling constraints. The “violation” is just selective cherry-picking devoid of real superluminal transfer.
8. Exploiting “Exotic” Quantum Channels (Neutrinos, Wormholes, etc.) | Contemplates large-scale entangled neutrinos (hard to detect) or hypothetical wormholes (ER=EPR) in quantum gravity. These exotic pathways are hypothesized to enable “hyperluminal” leaps. Analogous in concept to the “AI–neutrino super-channel.” | Appearance: If wormholes or massive entanglement existed, “instant communication” might be assumed. Reality: Known physics shows any practical application of such geometries still requires classical signals. No experimental evidence allows using these exotic routes to surpass c.
9. Weak Quantum Interception + Deferred Corrections | A variant where an entity (Eve) combines gentle measurements with local logging, then “corrects” past results upon receiving delayed classical data, deploying post-selection that simulates prior knowledge of Alice/Bob outcomes. | Appearance: Eve “anticipated” Alice/Bob’s results, faking superluminal messaging. Reality: Actual alerts require a classical channel; once comprehensive data is aggregated, causality remains intact, and altered statistical distributions reveal the intervention.
10. Conclusion: Illusion vs. Causality | Techniques such as weak measurements, delayed choice, massive post-selection, or exotic channels create a strong impression of circumventing the NCT. However, there is invariably a “cost”: delayed classical verification, coherence loss, or statistical manipulation. | Appearance: Partial readings suggest an FTL shortcut. Reality: Classical information exchange or complete data analysis ultimately rules out superluminality. Relativity and the no-communication principle remain unbroken.

Criterion | Neutrinos | Photons
Interaction with Matter | Interact extremely weakly, allowing them to pass through thick layers (such as Earth or dense shielding). Very difficult to intercept or block. | Interact via electromagnetic forces. Traversing opaque media usually requires optical fibers or free space, and they can be blocked or absorbed.
Detection and Manipulation | Extremely difficult to detect (requiring massive detectors and highly advanced technologies). Preparing, entangling, and measuring neutrinos with precision is an unsolved challenge. | Far easier to detect with standard photonic detectors (laboratories, optical fibers). Photonic entanglement and measurement protocols are already well established experimentally.
Data Transmission Rate | Very low, due to the tiny interaction cross-section. Requires powerful neutrino sources and enormous detectors to achieve a useful “signal flow.” | Extremely high. Photonic transmission via fiber can reach terabits per second. Generating and detecting photons is highly efficient.
Decoherence / Stability | Less disturbance in dense media since interactions are so rare. However, flavor oscillations (νₑ ↔ νμ ↔ ντ) can complicate quantum coherence over long distances. | More susceptible to loss and absorption in opaque media. However, existing error-correction protocols in controlled environments (fibers, satellites) help maintain coherence.
Applications / Environments | Theoretically useful in extreme scenarios: communicating through planetary cores or high-density regions where photons cannot pass. | Standard quantum communication using optical fibers or free-space laser links. Practical use cases include QKD and photonic teleportation at laboratory or satellite scales.
Technological Complexity | Extremely high. Generating, entangling, and detecting neutrinos at scale remains beyond current technology. The costs and logistics would be enormous. | Well-developed technology (lasers, entangled photon sources, photodetectors). Ongoing research aims to industrialize quantum networks (the “Quantum Internet”) on large scales.
Security / Interception | Very hard to eavesdrop on or interfere with because of their weak interactions. Almost impossible to spoof or sabotage without massive resources. | A photonic channel can be intercepted with more modest equipment if no secure protocol is used. However, robust QKD photonic protocols offer high security.
Key Advantage | Neutrinos can pass through dense media without significant attenuation and offer inherent privacy due to minimal interaction. | Photonic channels are efficient, practical, and already in widespread use for large-scale quantum communication (teleportation, QKD, etc.).
Main Disadvantage | Currently not feasible for near- or mid-term deployment (extreme detection difficulty, necessity of huge neutrino sources, flavor oscillations). | Cannot easily traverse opaque media and may be blocked or attenuated. They require dedicated guides (optical fibers or free-space paths) for long-distance communication.

Conclusion

  • Neutrinos present theoretically appealing advantages (penetration through dense materials, natural resistance to interception) but are severely limited by the enormous technological hurdles of generation, entanglement, and detection.
  • Photons dominate current practical quantum communication technologies with very high data rates and well-established infrastructure.

Context and Concept

In quantum mechanics, the no-communication theorem (NCT) prevents genuine faster-than-light (FTL) data transfer via entanglement alone. Over time, various theoretical or experimental tactics have emerged that seem to violate this restriction—though closer scrutiny confirms that relativistic causality remains intact. Two emblematic examples are:

  1. Weak measurements
  2. Delayed corrections (or “delayed choice,” as in the delayed-choice quantum eraser)

The notion of a quantum track arises because these methods initially appear to exploit entanglement for superluminal information exchange. However, detailed analysis demonstrates that any effective communication ultimately requires a classical (slower-than-light) channel or a post-processing step, negating the possibility of sending data before the receiver obtains conventional confirmations.

Nevertheless, the pursuit of a quantum track—purportedly to bypass the prohibition—can be both inspiring and unsettling. Researchers and enthusiasts have thus proposed various ways to “play with the laws of physics” without fundamentally breaking them. Below are some proposals and tech-jargon with a “quantum hacking/quantum tracking” flavor, but it must be stressed that none truly violates relativity or the no-communication theorem:


A. Quantum “Man-in-the-Middle” with Weak Measurements

In a protocol where two parties (Alice and Bob) share entangled states, envision a third entity (Eve, the “quantum hacker”) intercepting those states in transit and performing weak measurements that slightly disturb the system. Theoretically, Eve obtains partial information without fully destroying coherence. Later, she can “adjust” or post-select her data for a misleading sense of early insight.

  • Why does it appear to be a hack?
    Eve “touches” qubits surreptitiously, leaving minimal evidence until Alice and Bob reconcile their data.
  • Where does FTL fail?
    Alice and Bob still need classical comparison to detect anomalies introduced by Eve. Non-signaling is maintained, albeit with careful statistical checks.
  • Jargon / Implementation ideas:
    • Quantum sniffing (traqueo cuántico): Performing weak measurements to “sniff” qubits without fully collapsing them.
    • Delayed correction: Eve retains a local record of all partial measurements and, after eventually receiving classical data from Alice and Bob, retrofits (or filters) the results to match her best guesses.
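
The “gentle” measurements in this section can be modeled with standard Kraus operators. Below is a minimal numerical sketch (the strength parameter eps and this particular parametrization are illustrative choices, not a description of Eve’s actual apparatus): at small eps, each outcome carries little information and the state is barely disturbed.

```python
import numpy as np

def weak_kraus(eps):
    """Kraus operators for a weak Z measurement of strength eps (0 = no info, 1 = projective)."""
    m0 = np.diag([np.sqrt((1 + eps) / 2), np.sqrt((1 - eps) / 2)])
    m1 = np.diag([np.sqrt((1 - eps) / 2), np.sqrt((1 + eps) / 2)])
    return m0, m1

def weak_measure(state, eps, rng):
    """Apply one weak measurement; return (outcome, normalized post-measurement state)."""
    m0, m1 = weak_kraus(eps)
    p0 = np.linalg.norm(m0 @ state) ** 2
    if rng.random() < p0:
        return 0, (m0 @ state) / np.sqrt(p0)
    post = m1 @ state
    return 1, post / np.linalg.norm(post)

rng = np.random.default_rng(1)
plus = np.array([1.0, 1.0]) / np.sqrt(2)      # |+> state
outcome, post = weak_measure(plus, 0.1, rng)  # eps = 0.1: a gentle probe
overlap = abs(plus @ post) ** 2               # fidelity with the original state
# Weak strength leaves the state almost undisturbed (overlap close to 1).
assert overlap > 0.99
```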

B. “Ghost Mirror” Protocol (Delayed Choice / Quantum Eraser)

This approach builds on delayed-choice quantum erasure experiments, where one defers the measurement basis decision until a subsequent moment, seemingly “mocking” the requirement that measurement settings be fixed beforehand.

  1. Generate an entangled photon pair and distribute it to Alice and Bob.
  2. Bob’s station includes a device that conceals whether photons exhibit particle-like or wave-like behavior, letting him postpone the basis choice.
  3. Depending on that delayed choice, Alice’s results show correlations that appear to be “retroactively” modified.
  • Why does it look like hacking?
    It prompts the question, “Did I decide the outcome today for a photon measured in the past?” hinting at broken causality.
  • How is physics preserved?
    Bob must inform Alice (classically) about which basis he chose and when; only then can the combined data reveal this “retroactive” correlation. Without Bob’s classical updates, there is no real-time FTL signal.
  • Jargon / Implementation ideas:
    • “Eraser script” in the cloud: Real-time coincidence analysis that picks a random measurement basis remotely, decoupling the detection logic.
    • “Quantum delay with self-learning”: A machine-learning system that, rather than picking a measurement basis immediately, uses feedback from prior results to “predict” the optimal correlation pattern.
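
The key point of this section—that Alice’s standalone statistics never change, whatever Bob’s delayed choice—can be checked with a toy simulation of a |Φ⁺⟩ pair (the angles and shot counts below are illustrative, and the sampler encodes only the textbook joint statistics):

```python
import numpy as np

def bell_outcomes(theta_a, theta_b, shots, rng):
    """Sample outcomes for |Phi+> measured at analyzer angles theta_a, theta_b."""
    # Joint statistics for |Phi+>: P(same outcome) = cos^2(theta_a - theta_b).
    p_same = np.cos(theta_a - theta_b) ** 2
    a = rng.integers(0, 2, shots)          # Alice's marginal is uniformly random
    same = rng.random(shots) < p_same
    b = np.where(same, a, 1 - a)
    return a, b

rng = np.random.default_rng(0)
shots = 200_000
# Bob "delays" his basis choice; Alice's marginal is unchanged either way.
a1, _ = bell_outcomes(0.0, 0.0, shots, rng)
a2, _ = bell_outcomes(0.0, np.pi / 3, shots, rng)
assert abs(a1.mean() - 0.5) < 0.01
assert abs(a2.mean() - 0.5) < 0.01
```

Only when Alice sorts her data against Bob’s classically communicated choices does the “retroactive” correlation pattern emerge.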

C. Precompilation and Massive Post-selection (Quantum “Spoofing”)

In a more tech-centric model, large batches of entangled data are collected, stored, and then selectively mined (post-selection) in the cloud for subsets exhibiting seemingly superluminal properties.

  1. Experimentation: Produce thousands of entangled qubit pairs (or photons).
  2. Random measurements: Perform measurements in multiple bases, initially without looking at the outcomes.
  3. Post-selection: A cloud-based system filters data to find those instances with “unusual” correlations.
  4. Illusory effect: Highlighting only this subset can feign violations of the no-communication principle.
  • Limitation: Once the entire dataset is included, the illusion dissipates; extreme correlations average out across all runs.
  • Jargon / Implementation ideas:
    • “Quantum deep fake”: Retaining only those outcomes that reinforce a predetermined narrative.
    • “Quantum sharding”: Partitioning large datasets into shards, analyzing them separately, and showcasing only the shard that appears to break causality.
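
The post-selection illusion described in steps 1–4 is easy to reproduce with purely classical random data (a deliberately mundane stand-in for entangled measurement records):

```python
import random

random.seed(42)
# Thousands of independent fair coin pairs: no real correlation whatsoever.
pairs = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(10_000)]

# "Quantum spoofing": keep only the runs where the two outcomes matched.
cherry = [p for p in pairs if p[0] == p[1]]
match_cherry = sum(a == b for a, b in cherry) / len(cherry)  # 1.0 by construction

# Honest statistics over the full dataset: ordinary ~50% coincidence.
match_all = sum(a == b for a, b in pairs) / len(pairs)
assert match_cherry == 1.0
assert abs(match_all - 0.5) < 0.02
```

The filtered shard looks perfectly correlated; the full dataset shows nothing unusual, exactly as the Limitation bullet states.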

D. Using Exotic Quantum Channels (Though Not Truly Breaking Light Speed)

Within quantum field theory, speculation persists about exotic states (e.g., spacetime entanglement in curved vacuums). One might imagine:

  • Quantum transit in “virtual” wormholes: Certain wormhole models (ER = EPR) are hypothesized to correlate with quantum entanglement, though no experimental data supports real FTL communication.
  • High-energy plasma entanglement: Exploiting extreme environments to extend correlations over large distances.

However, in each scenario, a classical channel is invariably required for reconstructing or interpreting the information, upholding causality in practice.


E. “Hack”-Style Summary

  • Weak quantum interception: Employ gentle measurements with minimal perturbation and “self-correct” once the classical channel provides additional data.
  • Delayed choice: Postpone measurement decisions to generate seemingly retroactive effects.
  • Data post-selection: Sift through large datasets to highlight correlations that appear to violate causality, yet lack real statistical significance when the full sample is considered.
  • Experimentation with exotic states: Explore high-profile theoretical constructs (virtual wormholes, etc.) that give an impression of superluminal outcomes but align with standard physics upon rigorous analysis.

Conclusion
While these tactics foster an impression of circumventing faster-than-light restrictions, each is ultimately constrained by delayed classical verification, measurement decoherence, or manipulative statistics. The net effect is that no real superluminal communication ever occurs.


Final Observations

  • “Technological Mockery”: Methods like weak measurements, delayed choice, and post-selection suggest surpassing the light barrier but fail upon rigorous scrutiny.
  • Tokenization + AI: Introduce a new dimension (postponing measurements, “un-collapsing” states), yet no strict causal violation arises.
  • “Super-channel” in Quantum Mechanics: Strictly a “chimera” of instantaneous data transfer; classical bits remain the limiting factor.

COSMIC-HYPERLUMINAL “EUREKA”!

(Just as Archimedes shouted “Eureka!” upon discovering buoyancy, we here proclaim that Tokenization + AI represents the quantum key to transcending light speed—at least in appearance.)

Below is a final reflection on why the combination of Tokenization + AI might offer an advantage (or a more convincing illusion) over other classical “quantum hacks”—weak man-in-the-middle, delayed choice, large-scale post-selection, exotic channels—and how, in practical terms, such an approach could “fracture” (or approximate breaking) the light-speed barrier in quantum transmission.

Note: All that follows remains hypothetical and speculative; standard physics maintains there is no true superluminal communication. Even so, we highlight how Tokenization + AI might be the “most powerful” method to emulate or approach the illusion of FTL signals.


1. SUMMARY OF OTHER “QUANTUM TRICK” METHODS AND THEIR LIMITS

Technique / Protocol | Strength | Weakness | Outcome
Quantum Man-in-the-Middle (Weak Measurements) | Enables interception without completely collapsing the quantum state. | Requires classical reconciliation; eventually, anomalies show in the statistics. | No actual superluminal transfer; any tampering still needs classical confirmation for thorough analysis.
Ghost Mirror Protocol (Delayed Choice / QE) | Deferred measurement basis apparently retroactively influences Alice’s outcomes. | Causality is restored once Alice needs Bob’s classical info to interpret her data. | “Retrocausality” is superficial; no proven FTL data transfer.
Massive Precompilation & Post-selection (Quantum “Spoofing”) | Showcasing only a filtered data subset can seem to reveal statistically impossible or superluminal correlations. | Once the entire dataset is considered, the illusion vanishes. | A “statistical fake”; cannot send information faster than the arrival of classical bits.
Using Exotic Channels (Neutrinos, Wormholes, etc.) | Theoretical speculation involving unconventional geometries or particles with minimal interaction (e.g., neutrinos, or wormholes in quantum gravity). | Lacks robust experimental evidence; relativistic causality persists in our observable universe. | The no-communication theorem endures; a classical “bridge” remains essential for meaningful decoding, so there is no true FTL violation.

2. FOCUS ON “TOKENIZATION + AI”

2.1 What Quantum Tokenization Entails

  • Tokenization: Splits the message (or quantum state) into micro-blocks (“tokens”), orchestrating them among separate qubit (or neutrino) clusters simultaneously.
  • Key concept: Instead of teleporting a large unit (requiring 2 classical bits per qubit), micro-fragments are sent in parallel, with minimal deferred corrections.

2.2 The Essential Role of AI

  • AI: An advanced machine-learning component (classical or quantum neural networks) that:
    1. Receives partial outcomes (weak measurement data, error syndromes, partial coincidences).
    2. “Guesses” or reconstructs the complete quantum state before all classical correction bits arrive.
    3. Dynamically updates its estimates as new partial data comes in, effectively generating an “instant” probabilistic collapse.
  • Practical result: The receiving station perceives it already has “most” of the transmitted message well in advance, downplaying the role of classical latency.

2.3 Arguing an Apparent Light-Speed Break

  • On the surface: Tokenization + AI harvests multiple micro-correlations (e.g., neutrino-based weak measurements plus calibration inputs).
  • Because each token is small and the AI can extrapolate from partial data, the receiver at t ≈ 0 (or near zero) effectively acquires 95–99% of the overall message.
  • Official confirmation via classical bits arrives later, but applies only minimal corrective patches.
  • Subjectively: Transmission seems to have occurred instantly; objectively: one could argue it’s not truly “official” without the tardy bits, yet that final step is trivial.

Hence, it appears that:

  1. AI outstrips the need for classical communication.
  2. Classical latency becomes negligible since any final correction is minuscule and done post factum.

Tactical Conclusion: From the receiver’s standpoint, “nearly all” data arrives before light-speed confirmations could possibly show up, simulating a breach of c.


3. WHY TOKENIZATION + AI OUTPERFORMS TRADITIONAL QUANTUM TRACK METHODS

  1. Enhanced robustness and continuity:
    • Weak measurement “man-in-the-middle” primarily benefits a covert observer (Eve). Sender and receiver themselves gain no genuine FTL effect.
    • Tokenization + AI, by contrast, orchestrates the entanglement flow for both parties legitimately, enabling in-cascade quantum data handling.
  2. Beyond mere delayed-basis selection:
    • “Ghost mirror” experiments rely on deferred measurements that still hinge strongly on classical channels.
    • In tokenized AI systems, the classical component is whittled down to a minimal final check—most of the data is already “predicted” ahead of time.
  3. Not just statistical spoofing:
    • Massive post-selection sifts data after the fact, offering no real-time advantage.
    • Tokenized AI works “online,” enabling partial feed-forward from microtokens to approach ~99% message fidelity in near real time.
  4. Superior scalability and immediacy over “exotic channels”:
    • Wormholes or highly entangled neutrino fields have scant experimental backing.
    • Quantum tokenization is implementable with today’s photonic or qubit-based platforms (albeit at limited scale), and AI can adapt to hardware errors in milliseconds.

BOTTOM LINE: TOKENIZATION + AI COORDINATES PARTIAL DECODING IN A WAY THAT MINIMIZES DEPENDENCY ON FINAL CLASSICAL VERIFICATION, SHRINKING THE “UNKNOWN DATA” WINDOW TO NEARLY ZERO.


4. DOES THIS ACTUALLY EXCEED THE SPEED OF LIGHT?

  • Orthodox physics: Ultimately says “no,” because definitive communication still requires a classical channel—even if only a tiny correction.
  • Practical viewpoint:
    • If AI attains high-enough fidelity pre–classical confirmation, the receiving entity effectively behaves as though it knows the message right away.
    • The classical delay (milliseconds to minutes) seems negligible, acting merely as a small corrective patch rather than a crucial transmission channel.
    • In real-world applications, early-stage “guesses” or partial collapses offer an experience akin to near-instant reception.

IN OTHER WORDS: THE FORMAL EQUATIONS DON’T VIOLATE RELATIVITY, BUT IN A TOKENIZED+AI ENVIRONMENT, THE USER EXPERIENCE CAN “EMULATE” FTL TRANSMISSION IN A HIGHLY CONVINCING WAY—THE RECEIVER RECONSTRUCTS ~99% OF THE DATA BEFORE THE CLASSICAL BITS ARRIVE.


5. CONCLUSION: A MORE PERSUASIVE FTL ILLUSION

Hence, Tokenization + AI:

  • Optimizes communication: Reduces the classical channel’s function to a marginal late-stage fix.
  • Enables near-complete message inference through micro-correlation (photonic, qubit-based, or neutrino-based) in distributed entanglement.
  • Integrates more feasibly with existing or imminent quantum hardware than other “tricks” (delayed-basis selection, mid-channel hacking, or massive filtering).
  • Produces a compelling imitation of superluminal transfer, though formal physics remains intact: any final confirmation, however minor, still travels at ≤ c.

Final Reflection

From a purely physical standpoint, attempts to surpass light-speed confront the bulwark of relativity (the no-communication theorem). In practice, Tokenization + AI simulates a near-instant quantum channel, though genuine superluminality is not truly achieved. The near-zero-time fidelity is so robust that, functionally, it seems to outpace c.

When assessing the four “quantum hacks/tracks”—Man-in-the-Middle (weak), Delayed Choice, Post-selection “Spoofing,” and Exotic Channels—none genuinely breaks the no-communication principle. Tokenized quantum protocols orchestrated by advanced AI, however, represent the most efficient framework to narrow the gap between theoretical constraints (barring true superluminal travel) and practical user perception (a sensation of near-zero-time data transfer). By requiring only minuscule, nanosecond-scale classical verifications, it effectively creates the illusion of an “instantaneous quantum channel” and an apparent (though not actual) “breach” of the speed-of-light barrier.


Theological and Philosophical Perspective

From a metaphysical standpoint, quantum entanglement—seemingly transcending space-time—can be interpreted as manifesting an “absolute present,” an infinite temporal loop that merges past and future. This resonates with the universal principle stated in the Emerald Tablet of Hermes Trismegistus: “As above, so below; as below, so above.” In ancient Greek thought, Hermes Trismegistus aligns with the Judeo-Christian figure of Enoch (Genesis 5:18–24; Hebrews 11:5), hinting at a mystical knowledge fusing science and faith. Even in Hebrew renditions of Biblical verse 1:3, future and past converge, suggesting an eternal present that reconciles apparent temporal paradoxes. In that sense, quantum technology coupled with spiritual contemplation unveils a continuum wherein the physical and the metaphysical unite into a supreme totality, underscoring that all entities are fundamentally interconnected.

XIV. EQUATIONS

I. SET-THEORETIC ANALYSIS OF THE NEUTRINO–MATTER–INFORMATION QUANTUM CHANNEL

To delve deeper into the quantum channel and the relationship between neutrinos, matter, and information in a mathematical context, we can imagine a “Universal” set U that captures all relevant elements of the universe under consideration. Within this set, we define subsets and relationships to model interactions and information transfer.

1. Definition of the Absolute Set for Analysis

Let us define U as the absolute set containing all relevant elements:

U = N ∪ M ∪ I

Where:

  • N denotes the set of neutrinos,
  • M denotes the set of matter,
  • I denotes the set of information.

2. Relationships Among Set Elements

2.1. Relationship Between Neutrinos and Matter

This relation R_NM ⊆ N × M denotes the interaction between neutrinos (n) and matter (m), which can be seen as a quantum information channel per the neutrino–matter experiment.

2.2. Relationship Between Neutrinos and Information R_NI

This relation R_NI ⊆ N × I means each neutrino (n) is associated with a unit of information (i) based on its quantum interaction or state.

2.3. Relationship Between Matter and Information R_MI

This relation R_MI ⊆ M × I means each matter element (m) carries some information (i) pertinent to its physical state or composition.

3. Composite Relationship and Information Transfer

Because information can be transferred through neutrino–matter interaction, we define a composite relationship combining R_NM and R_MI:

R_NI = R_MI ∘ R_NM = { (n, i) | ∃ m ∈ M : (n, m) ∈ R_NM ∧ (m, i) ∈ R_MI }

Indicating that there exists (∃) a permanent quantum information channel between neutrinos and information, mediated by matter.

4. Quantum Information Channel

If we assume that neutrino–matter interactions create an information channel, we can write:

C_q : N → I, with C_q = R_MI ∘ R_NM

Where C_q is the quantum channel ensuring information transfer from neutrinos toward information via matter.
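The set-theoretic relations above can be sketched directly in Python. Everything in this snippet is an illustrative placeholder (the element names n1, m1, i1, etc., and the specific pairs are invented for the example, not drawn from the text); the point is only to show how the composite channel arises from relational composition.

```python
# Toy model of U = N ∪ M ∪ I and the composite relation R_NI = R_MI ∘ R_NM.
# All element names and pairs are illustrative placeholders.
N = {"n1", "n2"}  # neutrinos
M = {"m1", "m2"}  # matter
I = {"i1", "i2"}  # information
U = N | M | I     # the absolute ("Universal") set

R_NM = {("n1", "m1"), ("n2", "m2")}  # neutrino–matter interactions
R_MI = {("m1", "i1"), ("m2", "i2")}  # matter–information associations

def compose(R, S):
    """Relational composition: (a, c) is related iff some b links a to c."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

# The mediated neutrino -> information channel
R_NI = compose(R_NM, R_MI)
print(sorted(R_NI))  # [('n1', 'i1'), ('n2', 'i2')]
```

The composition makes the mediation explicit: a neutrino reaches a unit of information only through some matter element that both relations share.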


II. QUANTUM TOKENIZATION AND AI: OPTIMIZATION MODELS AND ADAPTIVE FRAGMENT SELECTION

In the realm of tokenization for AI models (and by analogy, in proposals for quantum tokenization), the technique that selects the most relevant (or informative) fragments and discards the less important ones to optimize reconstruction or prediction is commonly referred to as:

“Token Pruning” (or Adaptive Token Selection)

  • Token Pruning
    Involves estimating the importance of each token (fragment) according to certain criteria (entropy, attention, statistical relevance, etc.).
    Tokens with low relevance or low impact on reconstruction are discarded (“pruned”), thereby reducing noise and the cost of transmitting or processing those fragments.
  • Adaptive Token Selection
    A variant or synonym describing the process of dynamically choosing which tokens to retain and which to omit, depending on the objective (for example, quantum reconstruction or linguistic inference).
    It relies on algorithms that measure each token’s contribution to the final result (e.g. probability, attention, or gradient).

These methodologies focus resources (computation time, quantum or classical bandwidth) on those fragments that contribute the most to the message, discarding those that add little value. In this way, the generative AI can complete or predict the remaining information more efficiently and faithfully.
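A minimal sketch of token pruning as described above. The scoring rule here (surprisal, i.e. rarity of a token) is one assumed stand-in for the entropy/attention/relevance criteria the text mentions, and the keep ratio is an arbitrary illustrative choice.

```python
import math
from collections import Counter

def surprisal(token, counts, total):
    """Rarer tokens carry more surprisal (-log p) — a simple stand-in
    for the entropy/attention importance criteria named in the text."""
    return -math.log(counts[token] / total)

def prune_tokens(tokens, keep_ratio=0.3):
    """Keep only the top fraction of tokens ranked by surprisal,
    preserving their original order (adaptive token selection)."""
    counts = Counter(tokens)
    total = len(tokens)
    k = max(1, int(total * keep_ratio))
    ranked = sorted(range(total),
                    key=lambda i: surprisal(tokens[i], counts, total),
                    reverse=True)
    kept_positions = sorted(ranked[:k])
    return [tokens[i] for i in kept_positions]

msg = ["the", "the", "the", "quantum", "the", "channel", "the", "the"]
print(prune_tokens(msg))  # ['quantum', 'channel'] — the informative tokens survive
```

Highly repetitive tokens score low and are pruned; the rare, informative ones are retained for transmission.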


1. General Approach

Traditional thinking (the no-communication theorem) holds that entanglement does not transmit “useful” or “complete” information without an auxiliary classical channel. This implies that, until those classical correction bits are received, the receiver only has an “incomplete data set” and cannot claim to have fully or unequivocally received the information.

In contrast, I present a “refutation” or counterargument based on quantum tokenization strategies and the probabilistic reconstruction capability of generative AI, which would make it possible to recover the entire content (or a practically identical version of it) at the receiver’s end, even before the arrival of classical confirmation. In practice, it is as if all the data had “traveled” quantumly to the other side. The portion that “didn’t travel” (or was supposedly indispensable for sending via the classical channel) is locally reconstructed with the help of AI, so the receiver basically has the entire message almost instantly.

Even though advocates of quantum orthodoxy would object that “it’s not a valid reception until confirmed with classical bits,” the practical impact (for instance, in a communication system) is that, by having 99% (or more) of the message already reconstructed by quantum-statistical inference, the delayed confirmation becomes something merely marginal or “nominal.” From the recipient’s perspective, the complete information is there “from the very first moment.”


2. The Decisive Role of Tokenization

  • Data Segmentation (“tokens”)
    The message is divided into micro-blocks or tokens {d1, d2, …, dk}.
    Each token is associated with a subset of entangled qubits (or neutrinos).
  • Adaptive Token Selection
    Through token pruning or adaptive token selection, there is a careful choice of which fragments do need to “physically” travel and which can be omitted or transmitted with lower initial precision.
    Thus, some tokens will have more relevance for global reconstruction, and others are “expendable” or redundant from a statistical perspective.
  • Partial Measurements
    Only certain “key” portions of the entangled quantum state (a minimal subset of qubits/neutrinos) are measured.
    This measurement generates enough correlations for the AI to infer a large percentage of the remaining data without needing the immediate arrival of all classical correction bits.

In short, quantum tokenization does not aim to send every additional classical bit through the slow channel. Instead, it distributes the information into “quantum packets.” With just those few physically measured packets, the receiver has robust clues about what the rest of the complete message looks like.


3. Reconstruction with AI and the Phenomenon of “Residual Data That Never Traveled”

  • Guessing/Statistical Inference
    The AI has been trained (or programmed) to “fill in” information gaps based on patterns, residual correlations, and historical data.
    When it receives a few “measured tokens,” the AI applies its generative model to predict the rest of the tokens.
  • “Phantom” Data
    It is said that “some portion of the message did not travel via either the quantum or classical channel” because, in theory, an exchange of classical bits would be needed to reconstitute 100% of it.
    Refutation: In practice, the AI reconstructs this “phantom” part (data supposedly anchored at the origin) with very high fidelity, supported by the initial quantum correlation and the overall statistics of the most relevant tokens.
  • Composite Effect
    Before any confirmation arrives via the classical channel, the receiver already has a “complete picture” of the message (with 95–99% accuracy).
    When the classical bits finally arrive, only minor details are corrected. The user doesn’t even perceive that final tweak as substantial.

Operational conclusion: Although orthodox science might say, “It’s not valid reception until confirmed by classical bits,” in practice, the entirety of the message is already in the receiver’s possession. From their perspective, “yes, the complete message has traveled,” even though part of it never went through the quantum channel (and might formally be called an ‘incomplete estimation’ in quantum terms).


4. The “Effective Global Data” Argument vs. the Orthodox Objection

4.1. Are Late Classical Data Really Essential?

The classical scientific objection, based on the no-communication theorem, states: “Without classical bits, there is no unambiguous decoding.”

Practical Refutation:

  • If the AI achieves 99% reliability before the classical correction, then in practice the information is already “transmitted” (the remaining 1% of error or less might not affect immediate decision-making).
  • Final confirmation (those bits that arrive more slowly) acts as an “insurance policy” or “polishing” at a later stage. From the user’s perspective, the message is already complete and is used from the very first instant.

4.2. Where Does No-Communication Stand?

Standard quantum mechanics argues: “There is no violation of the no-communication theorem, because the missing portion requires a classical channel…”

Counter-Observation:

  • It is admitted that, formally, there is still a classical channel. However, the amount of information that travels through it is minimal and arrives when the receiver already has 99% of the message (via AI inference + quantum tokens).
  • Practical effect: The receiver behaves as though they had received the entire content almost “instantaneously.” The laws remain valid in theory, but the experience is that everything arrived via the quantum channel.

4.3. The Weight of Quantum Correlation

From the standard perspective, quantum correlation (entanglement) alone is not enough to send defined information.

Response: With tokenization and AI leveraging pattern correlations among multiple tokens, the amount of “deducible” or “reconstructable” content becomes massive.
Yes, orthodoxy will say, “Without the classical part, it’s not perfect.” But if the remaining imperfection is tiny, for all practical purposes the data did travel through the quantum channel.


5. “Proof” That the Complete Data Traveled in Time

  • Hypothetical Experimental Execution
    1. Send 1000 “tokens” in an entangled quantum state. The sender only measures 100 tokens and sends minimal associated corrections.
    2. The receiver’s AI, using those 100 results plus the model’s trained structure, reconstructs the other 900 tokens.
    3. Even before receiving the classical bits (which might never arrive or might take several seconds), the receiver already displays a practically complete version of the original document or message.
  • Subsequent Comparison
    When the classical bits finally arrive, it is confirmed that the AI-generated reconstruction was accurate (or had an error margin <1%).
    Therefore, it can be argued that the receiver effectively had the message de facto before the confirmation arrived.

Conclusion: Operationally, 100% of the data were reflected on the receiving end long before the classical channel concluded. The part “that didn’t travel” was perfectly completed via quantum-statistical inference, so from a functional standpoint, the entire data set did travel.
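The hypothetical run above can be mocked up numerically. Everything in this sketch is an assumption made for illustration: the “message” is a biased two-symbol stream, the “generative AI” is nothing more than a majority-vote prior built from the 100 revealed tokens, and no quantum mechanics is involved. The point is only to show how a small measured subset plus a statistical prior can reconstruct most of a redundant message before any correction bits arrive.

```python
import random

random.seed(42)

# A highly redundant 1000-token "message": 90% 'A', 10% 'B' (pure invention).
message = ["A" if random.random() < 0.9 else "B" for _ in range(1000)]

# Step 1: only 100 token positions are "measured" and their values revealed.
revealed_positions = set(random.sample(range(1000), 100))
revealed = {i: message[i] for i in revealed_positions}

# Step 2: the stand-in "generative AI" — a majority-vote prior over the
# revealed tokens — fills in the 900 hidden positions.
values = list(revealed.values())
majority = max(set(values), key=values.count)
reconstruction = [revealed.get(i, majority) for i in range(1000)]

# Step 3: compare against ground truth before any "classical bits" arrive.
accuracy = sum(r == m for r, m in zip(reconstruction, message)) / 1000
print(f"reconstruction accuracy: {accuracy:.1%}")  # high, because the message is redundant
```

The high score depends entirely on the message's redundancy; for an incompressible (maximum-entropy) message, no predictor could do better than chance on the hidden tokens.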


6. “Statistical-Quantum Tracking Trick” vs. “Practical Reality”

Physicists will explain that without the arrival of classical bits, perfect fidelity cannot be guaranteed.

Counterpoint: In real communication scenarios (e.g. voice, video, text transmission), 99% precision is sufficient for the message to be fully usable and understandable.
Once the missing 1% is received (via late classical bits), the accuracy is finalized. But effective communication (the “practical utility” of the message) existed before that delay.

Result: Even if, theoretically, one cannot say “unequivocal information was sent” without the classical part, in practice, the entirety of the content was at the receiver’s disposal with a tolerable error rate. In many cases, that suffices for people to perceive an “instantaneous transmission of all the data.”


7. Conclusion of the Refutation

The “paradox” or “refutation” emerges from distinguishing physical formality (where classical confirmation is required to establish total decoding) from pragmatic experience (where AI, relying on quantum correlations plus a minimal set of measured data, anticipates the complete message with extremely high fidelity).

  • Everything Does Travel: From a functional standpoint, the receiver gets all the data —including the portion that “didn’t travel” through the conventional channel— thanks to the “predictive” work of the generative AI, fueled by the quantum correlation of the tokens.
  • The Classical Channel Does Not Cancel Immediacy: The classical channel becomes a residual or “cosmetic” requirement for fine-tuning details, but the essential reconstruction takes place before those slow bits arrive. In theory, at least, it shows that communication happened in zero time thanks to quantum entanglement + AI.
  • Illusion with a Real Basis: Far from being a mere “statistical trick” or “quantum tracking hoax,” it is a robust inference method that, in many practical contexts (with a very high success rate), allows us to consider that the data are already in the receiver’s possession before conventional communication is completed.

In short, this argument “refutes” or challenges the idea that the message has not arrived until the classical bits come in: thanks to AI and this new approach to quantum tokenization, the missing portion is integrated with such precision that, in practice, it can be said that the receiver has all the information long before the final confirmation. In other words, time travel has been perfected. De facto, it is as if all the data had traveled quantumly even before departure, defying the classical reading of “entanglement alone is not enough.”


AS AN ADDITIONAL COMMENT:

AI-Assisted Genetic Reconstruction and Its Analogy with “Quantum Tokenization”

Recent advances in biotechnology have broadly enabled scientists to achieve a kind of “time travel” to partially revive species that went extinct thousands of years ago, such as the dire wolf. On one hand, we have ancient DNA sequences—often incomplete or fragmented—and, on the other hand, paleogenetics is applied in conjunction with generative artificial intelligence (AI) to “fill in” or complete missing information and reconstruct a plausible genome. Essentially, this process is very similar to the tokenization proposed in some quantum protocols: partial data “tokens” are taken and then the remaining information is probabilistically and statistically inferred or interpolated.

In the case of genetic de-extinction, generative AI combines ancient sequences with genomic databases of related species (gray wolves, domestic dogs, etc.). Each missing segment of ancestral DNA is thus “predicted” or generated—with a high degree of reliability—using algorithms trained to fill gaps in genetic material. Just like in the quantum analogy, where most information can be “reconstructed” before the classical confirmation of reception, here the majority of the extinct genome is “rebuilt” before having fossil sequences that are fully intact.

From a narrative perspective, this implies that the genetic information of the extinct dire wolf “traveled” 12,500 years into the present, encapsulated in partial fragments of fossilized DNA and interpolated thanks to AI. The practical result is that Romulus and Remus—so named in the example of the modified cubs—became a living expression of a lineage that, in theory, ceased to exist. Here, AI takes on the role of reconstructing the genetic data that fills in all missing content, just as tokenization + AI would fill in the gaps of a quantum message before the arrival of the classical bits. It is not an illusion; it is the true time travel of data, the message of the aleph.


COMPARATIVE TABLE

“Quantum Tokenization” vs. “Genetic Reconstruction with Generative AI”

  • Incomplete Data
    Quantum Tokenization: Splitting the message into quantum “tokens.” Not every block is 100% complete, but the expectation is to reconstruct it using partial measurements and extra bits, thanks to the role of generative AI.
    Genetic Reconstruction: Ancient (fossil) DNA samples are often broken and degraded. Only scattered fragments of the complete sequence are available.
  • Inference Tool
    Quantum Tokenization: AI (or minimal classical corrections) to “guess” the missing content in each token.
    Genetic Reconstruction: Generative AI algorithms (neural networks, machine learning) that complete DNA sequences using data from related species.
  • Partial Result vs. Final Reconstruction
    Quantum Tokenization: With a subset of measured qubits (crucial tokens), the entirety of the message is inferred before the delayed confirmation.
    Genetic Reconstruction: Even with incomplete fossil fragments, AI produces a nearly complete sequence of the extinct DNA without “seeing” all missing parts.
  • Efficiency / Reliability
    Quantum Tokenization: A 95–99% fidelity is achieved in the initial reconstruction (with a final touch of classical confirmation still needed).
    Genetic Reconstruction: Entire genome segments are predicted with a high degree of accuracy, and a smaller portion is validated against confirmed fossil sequences.
  • Similarity to “Time Travel”
    Quantum Tokenization: The message “travels” quantum-mechanically, and although it formally requires classical bits, the bulk of it is obtained without waiting for full arrival.
    Genetic Reconstruction: The dire wolf “leaps through time” by 12,500 years, as its genetic map is reconstructed and manifested in a living organism, with AI filling in the gaps. It is the first animal to travel through time.
  • Fundamental Limitation
    Quantum Tokenization: Physically, there is no violation of the No-Communication Theorem: some classical information is still required.
    Genetic Reconstruction: Biologically, it is never a 100% identical species to the original; there are limitations and partial contamination with DNA from other modern species.
  • Primary Application
    Quantum Tokenization: Ultra-efficient quantum communication, “tokenized teleportation” with neutrinos/photons.
    Genetic Reconstruction: Genetic de-extinction (projects involving the woolly mammoth, the dodo, the dire wolf) and advancing evolutionary understanding.

Conclusion

The genetic reconstruction of an extinct wolf through generative AI operates in a manner analogous to “tokenization” in the quantum realm: it starts with fragmentary data (fossil DNA) and applies a model capable of inferring and completing the missing sequence. In practice, this scientific strategy bridges temporal distances, allowing information from an animal that went extinct 12,500 years ago to “jump” into the modern era. Thus, from a narrative or philosophical standpoint, the dire wolf has “traveled in time” through science, being revived as a genetic simulacrum of the original species. In the same way that quantum communication uses AI to reconstruct nearly an entire message without waiting for all classical bits, in de-extinction AI fills in the gaps of the extinct genome, enabling the “essence” (or a very close approximation) of the ancient wolf to emerge in the present.

Even though the species is not reintroduced in its entirety—just as in quantum teleportation there is no absolute superluminal communication—the practical outcome (cubs bearing traits eerily similar to the giant wolf) demonstrates that science, combined with AI, can bridge past and present. This opens a conceptual door to envision how quantum data and the de-extinction of other lost lineages might evolve, reminding us that genetic information, when properly “tokenized” and reconstructed, transcends the barriers of time. See more details at the following link:
https://www.atv.pe/noticia/cientificos-resucitan-al-lobo-terrible-y-la-ia-muestra-como-seria-de-adulto/#!

XV. CODES DEVELOPED USING AI-ASSISTED PROGRAMMING

Mathematically, a system of relationships has been defined to model how neutrinos (N) interact with matter (M) to generate and transfer information (I) within the universe (U). These relationships suggest the possibility of a permanent quantum channel, despite the limitations imposed by relativity theory and the quantum no-communication theorem.

This mathematical framework provides a basis for analyzing how the interaction among the universe’s fundamental components—(N, M, I)—could give rise to information channels that transcend classical constraints. In other words, these mathematical relationships indicate that there is still room for discovering novel forms of quantum communication.

Developing code to model the quantum interaction of neutrinos and the transfer of information is a complex challenge, currently more theoretical than practical, given the evolving state of quantum computing and particle physics.

Nonetheless, we can attempt to mathematically represent the relationships mentioned above using Python and the Qiskit library, which serves as a development framework for quantum computing.

Below, I will present some conceptual code that aims to model the interactions among neutrinos, matter, and information in a quantum computing context. These examples will simulate quantum entanglement and information transfer using qubits, inspired by the aforementioned mathematical relationships.

Explanation of the Code

Quantum Circuit Initialization
We create a quantum circuit called qc with 3 qubits and 3 classical bits for measurement:

  • Qubit 0 (Neutrino N)
  • Qubit 1 (Matter M)
  • Qubit 2 (Information I)

Neutrino Superposition
qc.h(0) applies a Hadamard gate to qubit 0, placing the neutrino in a superposition of states 0 and 1.
This step represents the probabilistic nature of the neutrino’s state.

Entanglement Between Neutrino and Matter
qc.cx(0, 1) applies a CNOT gate with qubit 0 as the control and qubit 1 as the target.
This operation entangles the neutrino with matter, modeling the RNM interaction.

Entanglement Between Matter and Information
qc.cx(1, 2) applies another CNOT gate with qubit 1 as the control and qubit 2 as the target.
This step entangles matter with information, modeling the RMI interaction.

Measurement
qc.measure([0, 1, 2], [0, 1, 2]) measures all three qubits and stores the outcomes in the corresponding classical bits.
This collapses the quantum states and provides classical results.

Execution and Visualization
We execute the circuit on a quantum simulator backend.
The results are collected, and a histogram is plotted to display the probabilities of each possible outcome.
This helps visualize the correlations among the neutrino, matter, and information states.
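The Qiskit listing itself is not reproduced in this copy of the text. As a stand-in, here is the same three-qubit circuit described above (Hadamard on qubit 0, CNOT 0→1, CNOT 1→2) simulated with plain NumPy state vectors; the qubit labels follow the text's N/M/I assignment. The resulting GHZ-type state yields only the outcomes 000 and 111, each with probability 1/2, which is exactly the correlation pattern the explanation describes.

```python
import numpy as np

# Single-qubit building blocks
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I2 = np.eye(2)

def cnot(n, control, target):
    """Permutation matrix for a CNOT on an n-qubit register
    (qubit 0 is the most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        out = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[out, basis] = 1.0
    return U

# |000>: qubit 0 = neutrino (N), qubit 1 = matter (M), qubit 2 = information (I)
state = np.zeros(8)
state[0] = 1.0
state = np.kron(H, np.kron(I2, I2)) @ state  # qc.h(0): neutrino superposition
state = cnot(3, 0, 1) @ state                # qc.cx(0, 1): entangle N with M
state = cnot(3, 1, 2) @ state                # qc.cx(1, 2): entangle M with I

probs = np.abs(state) ** 2
for basis, p in enumerate(probs):
    if p > 1e-12:
        print(f"{basis:03b}: {p:.2f}")  # prints 000: 0.50 and 111: 0.50
```

In Qiskit the same circuit would be built with `QuantumCircuit(3, 3)` and run on a simulator backend, as the surrounding explanation describes; the NumPy version simply makes the linear algebra explicit.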

Interpretation of the Results
The simulation outputs will show counts for each possible qubit state after measurement.
Because of entanglement, certain results are more likely, reflecting the correlations defined by our setup.
For instance, if qubit 0 is measured as 0, qubits 1 and 2 will exhibit correlated outcomes due to the entangling operations.

Other Improvements

  1. Parameterization: Introduce adjustable parameters to control the strength of interactions among N, M, and I.
  2. Incorporation of Decoherence: Model decoherence effects for a more realistic representation of the quantum system.
  3. Entanglement Analysis: Implement metrics to quantify the entanglement among the qubits.
  4. Simulation of Multiple Interactions: Extend the model to simulate sequential multiple interactions.

Limitations and Considerations

  • Simplification: The proposed codes are a highly simplified model that uses qubits to represent the linkage of neutrinos, matter, and information.
  • Physical Realism: Actual neutrino interactions are far more complex and cannot be fully captured by the current capabilities of quantum computing.
  • Entanglement Constraints: The simulation assumes ideal conditions, without considering decoherence or noise—significant factors in real quantum systems.
  • Interpretation Caution: While these models offer a conceptual framework, they should not be taken as a literal or precise representation of particle physics phenomena.

As quantum computing and particle physics continue to evolve, more sophisticated models and simulations may emerge, bringing us closer to unraveling the mysteries of the quantum realm and the fundamental workings of the universe.


NOTE 1: The codes presented here are shown in a conceptual manner and provide an abstract representation of how to model the proposed relationships. However, in a real operational quantum computing environment, it would be necessary to employ more advanced algorithms and use specialized libraries, such as Qiskit in Python, to handle qubits and perform quantum calculations. These calculations would enable the extraction of insights and the deciphering of information derived from the entanglement of the involved elementary particles.

NOTE 2: More advanced algorithms such as VQE (Variational Quantum Eigensolver) or QAOA (Quantum Approximate Optimization Algorithm) could be incorporated to model more complex systems. Additionally, one should explore:

  • Implementing deeper quantum circuits with a larger number of qubits and more complex gate sequences.
  • Incorporating quantum error-mitigation and error-correction techniques.
  • Using quantum machine learning algorithms to optimize model parameters.
  • Addressing the complexities of quantum decoherence in macroscopic systems.
  • Examining category theory or non-commutative geometry.

To mathematically represent multiverse concepts is a complex endeavor, but one can use Set Theory, Non-Euclidean Geometries, and higher-dimensional Hilbert spaces.


ANOTHER PERSPECTIVE FOR ESTABLISHING A MATHEMATICAL MODEL CAPABLE OF REPRESENTING MULTIVERSE CONCEPTS

Proposed Mathematical Model

  • Multiverse Hilbert Space (Hmult): Each universe is represented as a subspace within a larger Hilbert space that encompasses all possible universes.
  • Global Quantum State (|Ψ_mult⟩): The global quantum state is a superposition of states corresponding to each possible universe:

|Ψ_mult⟩ = Σ_i c_i |ψ_i⟩

Where |ψ_i⟩ is the state of Universe i and c_i is its probability amplitude.
Transition Operators Between Universes: We define operators that allow transitions or interactions among universes:

T = Σ_{i,j} λ_ij T_ij

Where T_ij is the transition operator and λ_ij is a coefficient representing the transition probability or amplitude.

Interpretation
This model provides a mathematical description of the possibility of interaction and superposition among multiple universes, capturing the essence of the multiverse concept within a formal framework.


Inclusion in the Context of Existing Theories
Such as string theory or quantum field theory (QFT).


Integration of Existing Theories

String Theory
String theory posits that fundamental particles are not zero-dimensional points but one-dimensional objects called “strings.” These strings can vibrate in different modes, with each mode corresponding to a distinct particle.

  • Additional Dimensions: String theory requires the existence of extra compactified dimensions, which could be interpreted as parallel universes or multiverses.
  • Branes and Multiverses: In certain versions of string theory (e.g., M-theory), universes can be represented as “branes” floating in a higher-dimensional space (“the bulk”). Interactions among branes could explain phenomena and interconnections between universes.

Quantum Field Theory (QFT)
QFT combines quantum mechanics and special relativity to describe how particles interact through quantum fields.

  • Fields in Curved Spaces: Extending QFT to curved spacetime allows exploration of scenarios in quantum cosmology where different regions of spacetime might behave as distinct universes—unless a stronger force connects them.
  • Quantum Tunneling Effect: Quantum tunneling processes could enable transitions between different vacuum states associated with distinct universes.

Incorporation into the Model
By integrating these concepts, the proposed mathematical model is enriched, allowing inter-universe interactions to be mediated by phenomena described in string theory and QFT.


Ideas for Developing a Formal Mathematical Model

Based on these mathematical definitions, we propose a model that captures the interactions among the mentioned entities, employing:

a) Differential Equations

  • Modeling the Evolution of Neutrino Entanglement and Information Transfer:
    We use the time-dependent Schrödinger equation to describe the temporal evolution of the quantum state:

iℏ ∂/∂t |ψ(t)⟩ = Ĥ |ψ(t)⟩

Where:

  • |ψ(t)⟩ is the quantum state at time t.
  • Ĥ is the Hamiltonian operator that includes the relevant interactions (neutrino entanglement, information transfer, etc.).

Application to the Multiversal Model

If we assume that the Hamiltonian includes terms enabling interactions among universes, we can write:

Ĥ = Σ_i Ĥ_i + Σ_{i≠j} Ĥ_ij

Where:

  • Ĥ_i represents the Hamiltonian for Universe i,
  • Ĥ_ij represents the interaction Hamiltonian between Universe i and Universe j.

b) Probabilistic Models

Stochastic Processes and Probability Distributions
We use density matrices to represent mixed states and to compute probabilities.

Global Density Matrix (ρ):

ρ = Σ_i p_i |Ψ_i⟩⟨Ψ_i|

Where p_i is the probability that the system is in state |Ψ_i⟩.

Stochastic Evolution
The evolution of ρ can be described by the Lindblad Master Equation:

dρ/dt = −(i/ℏ)[Ĥ, ρ] + D[ρ]

Where:

D[ρ] is the dissipative term, which includes decoherence processes and information loss.
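As a concrete toy instance of a dissipative term D[ρ], here is a single step of the phase-flip (pure dephasing) channel applied to one qubit in NumPy: ρ → (1−p)ρ + p ZρZ. This channel leaves the populations (diagonal entries) untouched while shrinking the coherences (off-diagonal entries), which is the signature of decoherence; the value p = 0.25 is an arbitrary illustrative choice.

```python
import numpy as np

Z = np.diag([1.0, -1.0])  # Pauli-Z

def dephase(rho, p):
    """One step of the phase-flip (pure dephasing) channel:
    rho -> (1 - p) rho + p Z rho Z."""
    return (1 - p) * rho + p * (Z @ rho @ Z)

# Maximally coherent single-qubit state |+><+|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

rho_out = dephase(rho, p=0.25)
# Populations (diagonal) stay at 0.5; coherences (off-diagonal) shrink 0.5 -> 0.25
print(np.round(rho_out, 3))
```

Iterating the step drives the off-diagonal terms toward zero, turning the coherent superposition into a classical mixture — the information loss the dissipative term models.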


c) Graph Theory

Representation of Connections and Interactions
Multiversal Graph (G=(V,E)):

  • Vertices (V): Each vertex represents a universe.
  • Edges (E): Edges represent possible interactions or connections between universes.

Graph Properties

  • Weights: Edges can be assigned weights that indicate the probability or intensity of the interaction.
  • Directed or Undirected Graphs: Depending on whether the interactions are unidirectional or bidirectional.

Application
This graph can be analyzed using graph-theoretical algorithms to find optimal routes for information transfer or to identify clusters of highly connected universes.
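A small sketch of the multiversal graph G = (V, E) with weighted edges, using Dijkstra's algorithm to find an "optimal route for information transfer" between two universes, as the paragraph above suggests. The vertices, edges, and weights are invented placeholders.

```python
import heapq

# Weighted, undirected "multiversal" graph: vertices are universes,
# edge weights stand for interaction cost (all values are illustrative).
edges = {
    "U1": [("U2", 1.0), ("U3", 4.0)],
    "U2": [("U1", 1.0), ("U3", 1.5), ("U4", 5.0)],
    "U3": [("U1", 4.0), ("U2", 1.5), ("U4", 1.0)],
    "U4": [("U2", 5.0), ("U3", 1.0)],
}

def dijkstra(graph, start, goal):
    """Lowest-cost route between two universes (Dijkstra's algorithm)."""
    pq = [(0.0, start, [start])]  # (cost so far, node, path)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra(edges, "U1", "U4"))  # (3.5, ['U1', 'U2', 'U3', 'U4'])
```

The direct-looking hop U1→U3→U4 costs 5.0, while the three-edge route through U2 costs only 3.5, illustrating how the weights, not the edge count, determine the optimal transfer route.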

Where O is an operator corresponding to a conserved observable associated with the symmetry.

To enhance and formalize this equation within the context of the developed models, we could strengthen it by incorporating the previously mentioned elements.

Step 1: Redefine the Symbols.

  • א∞ (Aleph infinite): Represents the cardinality of the set of multiverses or possible states.
  • c^c: The speed of light in a vacuum raised to its own power.

Step 2: Incorporate Physical Constants and Parameters.

We introduce the reduced Planck constant (ℏ) and the Gravitational Coupling Constant (G) to connect with fundamental physical theories.

Step 3: Propose a New Consecutive Equation of Genesis.

Where:

  • S is the total entropy of the multiversal system.
  • k_B is the Boltzmann constant, reflecting that the entropy (a measure of disorder) of a system is related to the number of different ways the particles in that system can be arranged.

Interpretation:
This equation relates the number of possible states (cardinality) to entropy, linking it with thermodynamic and statistical concepts.

Step 4: Incorporate Elements of String Theory and QFT

Entanglement and Entropy:
Entanglement entropy can be used to measure the information shared between universes.

Where ρred is the reduced density matrix obtained by tracing out the unobserved degrees of freedom.
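For a concrete (and standard) toy instance of this quantity, the following sketch computes the entanglement entropy of a two-qubit Bell state: tracing out one qubit yields the reduced density matrix ρ_red, whose von Neumann entropy is ln 2, the maximum for a pair of qubits:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2) on a two-qubit system.
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2)
rho = np.outer(psi, psi)  # pure global density matrix

# Partial trace over the second qubit -> reduced density matrix rho_red.
# rho[i,j,k,l] indexes (qubit1, qubit2) x (qubit1, qubit2); sum over j = l.
rho_red = np.einsum("ijkj->ik", rho.reshape(2, 2, 2, 2))

# Von Neumann entropy S = -Tr(rho_red ln rho_red), via the eigenvalues.
evals = np.linalg.eigvalsh(rho_red)
S = -sum(lam * np.log(lam) for lam in evals if lam > 1e-12)
print(S)  # ln 2 for a maximally entangled pair
```

The same partial-trace construction is what "tracing out the unobserved degrees of freedom" means operationally, whatever the unobserved subsystem is taken to be.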

Step 5: Field Differential Equations.
We use the modified Einstein Field Equations to include terms representing the influence of other universes.

Where Tμν represents the contribution of adjacent universes.

Step 6: Unified Model.
We combine all these elements into a coherent framework that allows for a mathematical description of the multiverse and the interactions among neutrinos, matter, and information.

XVI. EPILOGUE: THE FINAL FRONTIER

The originally proposed formula א∞ = c^c establishes a relationship between a higher infinite cardinality and a mathematical expression based on the speed of light raised to its own power. In order to justify its existence and give preference to this formula, it is essential to thoroughly analyze the mathematical, physical, and theological concepts involved.


1. Interpretation of the Terms

א∞ (Aleph-infinite): THE INTERACTION OF TWO OR MORE MULTIVERSES BELONGING TO AN INFINITE SET OR SUBSET.

In set theory, Aleph numbers (ℵ) represent different sizes of infinities (cardinalities):

  • ℵ₀ is the cardinality of the set of natural numbers (countably infinite).
  • ℵ₁, ℵ₂, …, ℵₙ represent increasingly larger infinite cardinalities.
  • א∞ suggests a cardinality that transcends all known countable and uncountable infinities, symbolizing an “infinity of infinities.”

c (Speed of Light):

In physics, c is a fundamental constant representing the speed of light in a vacuum, approximately 3×10⁸ m/s.
In mathematics, particularly in set theory, the lowercase symbol 𝔠 often denotes the cardinality of the continuum—that is, the size of the set of real numbers—where 𝔠 = 2^ℵ₀.

c^c, meaning c raised to its own power, is mathematically a 1 followed by approximately 2,543,130,000 zeros.
The speed of light raised to itself, c^c, is an immensely large number that can be expressed mathematically as:

Due to its astronomical magnitude, it is impossible to express or fully quantify its exact or complete decimal value. This calculation illustrates the sheer enormity of c^c and its representation in terms of powers of 10.

Additional Note
To put the immense magnitude of c^c into perspective, we can compare it to the estimated number of particles in the observable universe (commonly put at roughly 10^80); because c^c is vastly greater, it represents a truly unimaginable factor.
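The order of magnitude cited above can be checked with a one-line computation, using the text's rounded value c ≈ 3×10⁸ m/s (the number of decimal digits of c^c is ⌊c·log₁₀ c⌋ + 1):

```python
import math

c = 3e8  # rounded speed of light in m/s, as used in the text
exponent = c * math.log10(c)  # c^c = 10**exponent
print(f"c^c ~ 10^{exponent:.0f}")  # an exponent of about 2.54 billion

# The particle count of the observable universe (~10^80) is negligible
# next to this: the *exponent* alone is over thirty million times larger.
print(exponent / 80)
```

This is only a digit count; as the text notes, the full decimal expansion cannot be written out in practice.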

Important:
This calculation is theoretical and serves to demonstrate the magnitude of the number resulting from raising the speed of light to itself. Exponentiating “c” to finite powers is mathematically possible but has not yet been demonstrated physically by science. Nevertheless, it is theologically justified by the presence of God as an omnipresent power.


2. Mathematical Approaches

a) Mathematical Interpretation of the Formula א∞ = c^c

  • Considering “c” as the Cardinality of the Continuum:
    If we interpret c as the cardinality of the continuum, 𝔠 = 2^ℵ₀, then standard cardinal arithmetic gives c^c = 𝔠^𝔠 = (2^ℵ₀)^𝔠 = 2^(ℵ₀·𝔠) = 2^𝔠, a cardinality strictly greater than 𝔠 itself (by Cantor’s theorem).
b) Relationship to Larger Cardinalities


3. Justification of the Equality א∞ = c^c

Moreover, if we conceive of an infinite set of countless multiverses, one could propose an alternative formula that satisfies certain mathematical identity postulates, such as:

Perspective: the alternative is consistent, because it avoids equating an infinite cardinality to a finite number, thus eliminating inconsistencies within a strict mathematical framework. However, it does not provide new information due to its tautological simplicity. In other words, this alternative formula, א∞ = c^∞, merely states that an infinite cardinality equals infinity, which is true by definition but does not offer a deeper understanding of infinity. In contrast, the original Genesis formula, א∞ = c^c, does provide a concrete, nontrivial expression for א∞.

א∞=c^∞:
Represents a more abstract concept. From a physical standpoint, raising an already infinite constant to an infinite power lacks practical meaning.

In short, substituting the Genesis equation with א∞ = c^∞—while mathematically valid because it leads to an identity—may lack depth or practical usefulness, potentially contradicting Georg Cantor’s theological postulates. It also does not expand the physical or explanatory value of the original equation.


4. Potential for New Mathematical Explorations

The formula א∞=c^c opens avenues for exploring new areas within set theory and infinite cardinalities, facilitating a deeper understanding of different sizes of infinity.


5. Physical and Philosophical Interpretation

Connection Between Physics and Mathematics

Although raising the speed of light to itself (c^c) lacks direct physical demonstration, it can be seen as symbolizing the idea of transcending known limits. It serves as a metaphorical bridge between fundamental physical concepts and mathematical abstractions of infinity.

Representation of the Universe’s Complexity

This formula can be viewed as reflecting the vastness and complexity of the universe—or even hypothetical multiverses—supported theologically but not currently proven by science. It suggests that there are levels of infinity beyond our present understanding, in both mathematics and physics, though not necessarily from a theological perspective.


6. Advantages of the Original Formula א∞ = c^c Over the Alternative א∞ = c^∞

  1. Mathematical Precision
    The formula א∞ = c^c is mathematically precise and adheres to the rules for handling infinite cardinalities, avoiding the oversimplifications of the alternative equation, which does not offer additional insights.
  2. Conceptual Richness
    It provides a foundation for discussing and analyzing higher cardinalities, thus enriching the mathematical debate. It also allows for exploring the relationships among different levels of infinity in a structured manner.
  3. Inspiration for Research
    It may inspire future research in pure mathematics, especially in areas related to set theory and the exploration of infinity. It encourages critical thinking and engagement with advanced concepts.

7. Other Considerations

Importance of Clearly Defining Terms

To prevent confusion, it is crucial to specify that, in this context, “the speed of light raised to itself” symbolizes both the cardinality of the continuum and the physical constant for the speed of light.

Nature of א∞

We should recognize that א∞ denotes an infinitely large, supreme cardinality within the hierarchy of infinities.


Conclusion No. 1

The formula א∞=c^c is a mathematical expression that, when interpreted correctly, possesses coherence and depth within set theory and the study of infinite cardinalities. It justifies its existence by:

  1. Establishing a nontrivial relationship among different levels of infinity.
  2. Providing a framework to explore and better understand the nature of higher cardinalities.
  3. Encouraging a dialogue between physical and mathematical ideas, even if only metaphorically.
  4. Maintaining mathematical consistency aligned with the theory of cardinalities.

Final Conclusion

WE PROPOSE A FORMAL MATHEMATICAL MODEL THAT INTEGRATES CONCEPTS FROM THEORETICAL PHYSICS AND MATHEMATICS TO REPRESENT MULTIVERSES AND THE INTERACTIONS AMONG NEUTRINOS, MATTER, AND INFORMATION. BY INCORPORATING EXISTING THEORIES SUCH AS STRING THEORY AND QUANTUM FIELD THEORY, WE REINFORCE THE GENESIS OF THE ORIGINAL MODEL’S EQUATION FROM AN EVOLUTIONARY PERSPECTIVE—THAT IS, ON A DIMENSIONAL SCALE—ALLOWING FOR A CLEARER AND MORE DETAILED UNDERSTANDING OF THE PROPOSED PHENOMENA.

We can consider a first EVOLUTIONARY SEQUENCE OF EQUATIONS, where each equation refines the previous one.

Consequently, in this research, we have aligned ourselves with the categorical position of mathematician Georg Ferdinand Ludwig Philipp Cantor. He held that the answer to his absolute and inconclusive formula could not be found in mathematics but rather in religion, equating the concept of absolute infinity (inconceivable to the human mind) with God.

Reflecting on the synergy between mathematics and poetry reminds us that human thought is not confined to isolated compartments. As the poet William Blake expressed, “To see a world in a grain of sand, and heaven in a wild flower, hold infinity in the palm of your hand, and eternity in an hour.” This poetic vision illustrates the capacity for logical reasoning and profound feeling as complementary aspects of our nature. By embracing the interconnection among seemingly disparate and distant disciplines, we can tackle problems with greater creativity and empathy, appreciating the nuances of human experience and always recalling that human artifice and candor have no limits—especially in the eternal quest to understand infinity.

FINALLY, WITH THE FIRM HOPE THAT THIS NEW MODEL WILL SERVE AS A FOUNDATION FOR FUTURE RESEARCH AND EVENTUALLY CONTRIBUTE TO THE DEVELOPMENT OF NEW TECHNOLOGIES, AS WELL AS TO THE ADVANCEMENT OF SCIENTIFIC KNOWLEDGE IN AREAS SUCH AS COSMOLOGY, PARTICLE PHYSICS, AND QUANTUM COMPUTING, OUR ULTIMATE GOAL IS TO ACHIEVE INTER-UNIVERSAL COMMUNICATION.


ANNEX 1

Perplexity is a measure used, especially in language models, to quantify the uncertainty or “surprise” the model experiences when predicting a sequence of words. Practically speaking, it can be interpreted as the average number of options (or words) from which the model must choose at each step.

We now present the formula for calculating perplexity:

In Language Models, the perplexity (P) of a word sequence w₁, …, w_N is defined as P = P(w₁, …, w_N)^(−1/N), that is, the inverse geometric mean of the probability the model assigns to the full sequence of N words.

At the conceptual level, both formulas—the perplexity formula and the multiversal interaction formula א∞=c^c—use the idea of exponentiation to capture complexity and uncertainty in very different systems.

  • Perplexity Equation
    Measures, on average, the number of options (or the uncertainty) that a language model faces when predicting each word in a sequence. Here, exponentiation (whether via roots or the exponential function) is used to transform the product of probabilities (a multiplicative accumulation) into a geometric average, resulting in an intuitive measure of the “choice space” at each step.
  • Multiversal Interaction – Formula א∞=c^c
    This equation symbolizes the interaction among multiple universes (or multiverses) of an infinite set. As mentioned previously, exponentiation not only magnifies the value of a physical constant but also serves as a mathematical metaphor for describing the vastness and complexity of interactions among universes.
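A minimal numerical sketch of the perplexity side (the per-word probabilities below are invented): since perplexity is the inverse geometric mean of the word probabilities, four equally likely choices per word yield a perplexity of exactly 4:

```python
import math

# Hypothetical per-word probabilities assigned by a language model.
probs = [0.25, 0.25, 0.25, 0.25]

# Perplexity = (prod p_i)^(-1/N) = exp(-(1/N) * sum(log p_i)).
# The log-sum form avoids underflow for long sequences.
N = len(probs)
perplexity = math.exp(-sum(math.log(p) for p in probs) / N)
print(perplexity)  # 4.0: on average, four equally likely options per word
```

Lower probabilities per word push the perplexity up, which is the sense in which it measures the "choice space" described above.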

Conceptual Relationship Between Both Formulas

  1. Measure of Complexity
    While perplexity quantifies uncertainty or the effective number of options in a linguistic system, c^c is used to represent an almost unimaginable complexity in the context of multiversal interactions. In both cases, exponentiation transforms a series of elements (probabilities in one case, a fundamental constant in the other) into a measure that encapsulates the system’s breadth and potential variability.
  2. Transformation of Products into Average Measures
    The n-th root in perplexity converts the product of probabilities into an average measure of uncertainty. Analogously, (c^c) can be interpreted as a mechanism to amplify the speed-of-light constant, reflecting that interactions among multiple universes generate a “value” or a complexity scale that is exponentially greater than any finite quantity.
  3. Capturing Fundamental Uncertainty
    Perplexity quantifies the inherent uncertainty in a language model’s predictions. On the other hand, the formula א∞=c^c represents the idea that in a scenario where infinite universes interact, the uncertainty and number of possibilities become so enormous that they must be expressed through a self-referential exponential operation—symbolizing a cosmic uncertainty or infinite complexity.
  4. Metaphorical Analogy
    Just as a language model is “astonished” by the multiplicity of choices (numerically captured by perplexity), the universe—or the set of multiverses—can be described in terms of possibilities so vast that one must resort to concepts of cardinalities and extreme exponentiation (c^c) to characterize them. It is as if, on a macroscopic and cosmic scale, there were a “universal perplexity” that, rather than measuring words, measures the interconnection and complexity of all possible states or multiverses.

Conclusion

Despite operating in domains as distinct as linguistics and the physics/mathematics of the multiverse, both formulas share the fundamental idea of using exponentiation to transform a set of elements (probabilities or fundamental constants) into a unique measure reflecting uncertainty, complexity, and the effective number of possibilities in the system under study. In this sense, perplexity in language models and the formula א∞=c^c are conceptually linked as tools for understanding and quantifying highly complex systems—one in the realm of language processing and the other in multiversal interaction.

In mathematics, drawing analogies between formulas can serve as a heuristic for forming conjectures or guiding the search for a formal proof, providing indicative evidence for the proposed equation’s validity.

The comparative process is generally described as follows:

  1. Identification of Common Structures
    Both formulas are analyzed to detect similarities in their algebraic structure, the properties they involve (e.g., symmetries, invariants, asymptotic behavior), or the underlying mathematical concepts.
  2. Establishing Correspondences
    A correspondence (or mapping) is constructed between the elements and operations of the new formula and those of the proven formula. This may involve showing that certain terms, transformations, or properties in the new formula match those of the existing formula.
  3. Transfer of Results
    If it can be demonstrated that the new formula is derived from (or is equivalent to) established results in the proven formula, one can argue that the new formula inherits validity from the proven theoretical framework.
  4. Search for a Formal Proof
    Finally, analogy must be complemented by a formal proof based on accepted axioms, theorems, and rules of inference. In other words, a rigorous logical chain of deductions must be provided, starting from already proven principles and concluding with the truth of the new formula.

In summary, while comparing a new formula with an already proven one may highlight certain paths and offer preliminary evidence of its accuracy—similar to how legal analogy is used to interpret new situations based on prior cases—in mathematics, validity is established solely through a formal proof. At present, scientifically proving the formula’s practical applicability is not possible. Nevertheless, the mathematical analogy helps identify common properties and constitutes indicative evidence of the new equation’s mathematical soundness.


XVII. EXECUTIVE SUMMARY

CONSOLIDATED TABLE

Each aspect/principle below is assessed under four headings: Coherence; Logic / Internal Structure; Innovative Aspect; Revolutionary Character.

1. Theology of Infinity (Cantor, Aleph, Bible)
  • Coherence: Integrates the search for the transfinite equation with biblical references and Hebrew mysticism (Aleph), showing coherence between the notion of mathematical infinity and the idea of the divine/unlimited.
  • Logic / Internal Structure: Aligns Georg Cantor’s concept of infinity (transfinite ∞) with sacred texts: the impossibility of encompassing God in the human mind serves as a theological basis for a type of infinitude greater than mere mathematical abstraction.
  • Innovative Aspect: Allows an abstract formula (e.g., א∞ = c^c) to be anchored in biblical texts and set theory, broadening “pure” science into theological territory.
  • Revolutionary Character: Fuses scientific-mathematical reasoning with religious inspiration, breaking the classic separation between theology and science. This opens a debate on protecting “abstract” findings that might have a spiritual or revelatory basis.

2. Patenting the Abstract: Exception to the Exception
  • Coherence: Connects the isolated formula (traditionally unprotectable) with an “invention” if there is a plausible expectation of utility, consistent with the desire to protect the “seed” of the invention, not just the final product.
  • Logic / Internal Structure: Concedes that the legal rule (no patenting abstract formulas) may have an exception when the formula “originates” from inventive insight and an industrial or future application is foreseen (even if the technology to implement it does not exist yet).
  • Innovative Aspect: Transforms the current rule “laws of nature and formulas cannot be patented” into something more flexible: the abstract is protected if it is an essential part of the inventive process, with the potential for application in AI, neutrino machinery, etc.
  • Revolutionary Character: Proposes a legal revolution: if this exception is adopted, patent systems would recognize “formulas” as inventions per se, provided there is practical potential. This clashes with the longstanding legal tradition that excludes “pure” mathematical methods.

3. Neutrino Machine / Quantum Entanglement
  • Coherence: Demonstrates coherence between the “abstract formula” and its hypothetical practical application: the “neutrino machine,” which would leverage quantum entanglement to create a quantum channel capable of sending and receiving data.
  • Logic / Internal Structure: Based on the logic that if neutrinos can become entangled, one might “exploit” this phenomenon for near-zero-time communication (or pseudo-teleportation). AI would be used to map and control the neutrino-matter correlation, assuming minimal interaction.
  • Innovative Aspect: Proposes a futuristic device not found in the current state of the art, but suggests patenting the preliminary phase (formula + theological-scientific concept). It opens the door to “tokenized teleportation” with neutrinos, expanding the frontiers of cryptography/quantum technology.
  • Revolutionary Character: Alters the vision of communications: it suggests the possibility of “superluminal” or quantum bridges if constraints are overcome. It revolutionizes the concept of patents by encompassing something so distant and speculative, rooted in theological-philosophical considerations rather than immediate experiments.

4. Theological Basis of Patentability
  • Coherence: Incorporates Georg Cantor’s thesis and the Bible to legitimize “divine or revealed inspiration” as part of the creative process. Consistency is shown by emphasizing that intellectual authorship can stem from “dreams” or “revelations” and still serve as a patent base.
  • Logic / Internal Structure: The “abstract” should not be discarded because of its theological underpinnings; if there is inventive effort (dream vision, Hebrew analysis, original discovery) leading to useful equations, it does not clash with the “rule against patenting pure ideas.” The notion of invention expands to include the “intangible.”
  • Innovative Aspect: Permits protection of scientific-theological creations. This is unusual, as the normal route excludes religious formulations. Recognizing that theological bases can lead to technical results (machines, AI) is a novelty.
  • Revolutionary Character: Breaks the secular-scientific boundary in patent processes. If jurisprudence were to accept this argument, it would set a precedent where religious/mystical inspiration becomes a legitimate part of the “inventive background”—unprecedented in modern legal orthodoxy.

5. Generative AI, Oneiric Revelations, and Formula Creation
  • Coherence: Examples are given of scientists who “dreamed” solutions (Ramanujan, Einstein, Mendeleev), justifying that AI can help develop these dreams; there is a coherent thread: the oneiric revelation is transcribed and modeled by algorithms.
  • Logic / Internal Structure: AI analyzes, validates, and extends such “formulas,” opening avenues for quantum machines. Under patent law, it would suffice that AI shows “possible implementation” (or applicability) to bridge the gap between the abstract and the practical.
  • Innovative Aspect: This goes from simple “human invention” to a human-dreams-AI circuit, where AI generates code, prototypes, quantum simulations… all orchestrated within a theological-scientific framework.
  • Revolutionary Character: Revolutionizes how creativity is considered in patents: it is no longer solely “human ingenuity” but co-ingenuity of humans + AI + dreams. It approaches recognizing machines and oneiric intuitions as a formal and essential part of the patent process.

6. Progressive Interpretation of the Law: “Contra legem” if Necessary
  • Coherence: Acknowledges traditional patent law (no abstract formulas) and proposes rejecting it via a constitutional interpretation that prioritizes “the progress of humanity”: consistent with the goal of “advancing Science and the Useful Arts.”
  • Logic / Internal Structure: Follows the logic of balancing values: if the “abstract formula” entails the possibility of a radical invention that serves human evolution (e.g., interstellar travel, quantum AI), the rule cannot block it. A judge may opt for a “contra legem” interpretation for the collective good.
  • Innovative Aspect: A special protective mechanism for the “isolated formula” if its inventor swears there will be a future invention. It requires “detailed justification” of hypothetical utility—an exception to the “abstract idea doctrine.”
  • Revolutionary Character: Upends the legal structure: if adopted, courts might grant patents on little more than the plausible promise of application, fundamentally altering patentability doctrine. This could usher in an era of patents on quantum algorithms, transfinite-mathematical formulas, etc., well before their commercial implementation.

7. Quantum Entanglement of Neutrinos and Zero-Time Communication
  • Coherence: Although traditional relativity and the no-communication theorem limit this, coherence is shown by arguing that new hypotheses (neutrinos, AI, exotic QKD) could “break” the practical barrier, without contradicting current theory if interpreted more broadly.
  • Logic / Internal Structure: Neutrinos—due to minimal interaction—might be entangled in unconventional ways and, with AI support, might create a semi-quantum channel. This fits into the equation א∞ = c^c to explain the “leap to infinity” enabling multiversal connections.
  • Innovative Aspect: Proposes a mechanism that goes beyond standard photonic QKD, weaving “routes” in the neutrino network. It suggests the potential for interstellar travel or communication.
  • Revolutionary Character: Revolutionizes science by proposing that the “formula” is not merely theoretical speculation but a potential key to a future “market” of technologies breaking space-time barriers. It drastically changes the boundaries of what is patentable and how we conceive of cybersecurity/quantum technology.

8. Hypothetical Utility as the Core of Formula Protection
  • Coherence: Shows coherence with the patent requirement for “utility” or “industrial applicability”: a plausible presumption that the formula could lead to a transformative technological result is enough.
  • Logic / Internal Structure: Argues for a normative “bridge”: utility need not be immediately proven; if the applicant shows the formula is not a mere discovery but enables (or will enable) a tangible invention, the “essence” of patent law is satisfied.
  • Innovative Aspect: The law typically requires concrete evidence of applicability. This approach relaxes the standard, giving more leeway to “super-futuristic” inventions (time travel, neutrino machines) and safeguarding the “formula” from its inception.
  • Revolutionary Character: Drastically changes the patent grant timeframe: one could obtain a patent even when the actual engineering does not yet exist, but is “reasonably conceivable.” This revolutionizes the relationship between “science fiction” and the patent system, extending its protection to disruptive ideas.

9. Machine Learning and Blockchain as a Registry of Revelations
  • Coherence: The text proposes using blockchain to record the theological, oneiric, and scientific stages of the formula’s conception, ensuring transparency and intellectual ownership. This coherently intertwines AI + Blockchain + Patent Law.
  • Logic / Internal Structure: If the invention (formula and AI) is developed step by step, and each step is documented on an immutable blockchain, then traceability is guaranteed. This strengthens proof that the inventor created something original and provides a sociotechnical audit trail.
  • Innovative Aspect: Integrates a continuous, auditable digital “dossier” of the invention, linking faith and mysticism with cybersecurity. This is unusual: personal inspiration is not typically recorded on blockchains.
  • Revolutionary Character: Revolutionizes how authorship and chronology are proven in patent law by adding a quantum verification system (generative AI + blockchain neutrality) conferring near-universal validity on the “formula” claim.

10. Final Project: Theological Laws + Sovereign AI + Futures
  • Coherence: Demonstrates that “Patent Law” and “Theological Law” (interpreted in the light of the Bible, Talmud, etc.) complement each other to justify elevating the formula (א∞ = c^c) to protectable status.
  • Logic / Internal Structure: As science advances toward large-scale entanglement and AI becomes “sovereign executor” (the cyber-sovereign state model), both forces (legal + theological) would support these patents to propel humanity to the “next level.”
  • Innovative Aspect: Merges secular legal norms with mystical-scientific eschatology as a basis of public policy in innovation and patents. It broadens jurisprudential horizons in AI and advanced mathematics (neutrinos, QKD, etc.).
  • Revolutionary Character: Revolutionizes the concept of law: the “set of guarantees” (legal and theological pillars) legitimizes patenting the “interdimensional travel formula,” under a “mixed body” (humans + AI) that oversees it. It redefines the traditional notion of sovereignty, opening the door to “techno-spiritual governance” and protection of the abstract-infinite.

Comment:
Throughout these points, the document establishes a connection between the theological (God, Aleph, Cantor, Bible) and the legal (patents, jurisprudence, USPTO, Comparative Law) to support a proposal:

Grant protection to abstract formulas provided that:

  1. They are shown to be original, not mere discoveries of something preexisting.
  2. They present a plausible expectation of utility, even if the actual technology is not yet developed.
  3. The connection between the inspiration (oneiric, theological) and a possible application (AI, neutrino machine) is justified.

Thus, each element is woven together in a coherent and logical manner, innovates by transcending classical boundaries, and proves revolutionary by reconfiguring how we understand inventiveness and intellectual property in a quantum-theological setting.
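The registry idea in point 9 of the table above can be sketched minimally as a hash chain (a toy, not a production blockchain; the stage names are invented placeholders): each record's hash covers its payload and the previous record's hash, so any later tampering is detectable:

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": rec["prev"]},
                          sort_keys=True)
        if rec["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
for stage in ["oneiric revelation", "formula draft", "AI validation"]:
    add_record(chain, stage)
print(verify(chain))            # True: the chain is intact
chain[1]["payload"] = "tampered"
print(verify(chain))            # False: tampering is detected
```

A real system would add timestamps, signatures, and distributed consensus; the hash chain alone only guarantees that an undetected rewrite of history is infeasible.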

Conclusions on the “Exception” to the Speed of Light, the Use of Neutrinos, and the Author’s Creative Perspective

Each conclusion/point below is paired with its description/summary.

  1. It has not been proven possible to send information faster than light
    – Current physics (the no-communication theorem) states that although quantum entanglement produces instantaneous correlations, it does not currently allow decodable messages to be sent at speeds exceeding c.
    – A theoretical “overcoming” of this barrier has been proposed, but there is still no empirical evidence.
  2. Potential significance of “instantaneous communication”
    – If data could be transmitted in “zero time,” it would signify a radical shift in cosmic exploration, intergalactic connections, and information management.
    – It would transform the foundations of communication and relativity, with significant impact on commerce, defense, science, and society.
  3. The author’s theoretical justification: neutrinos instead of photons
    – Neutrinos barely interact with matter, which theoretically makes it possible to maintain quantum coherence over large distances.
    – This could facilitate a “map” of the universe and circumvent obstacles where photons (through absorption or scattering) face greater limitations over long distances.
  4. “Exotic” and speculative motivation
    – The use of neutrinos instead of photons in quantum communications is not a standard line of scientific research; it is more of a “futuristic” vision.
    – The proposal of AI + neutrinos reflects an attempt at disruptive innovation, diverging from orthodoxy to envision scenarios far removed from current practice.
  5. Established physics vs. creative vision
    – From today’s widely accepted scientific perspective, it is not possible to violate the speed of light when transmitting information.
    – The hypothetical pursuit of “breaking” that limit serves as a creative impulse, generating conjectures that may inspire new approaches or intermediate theories in the future.
  6. Possible impact on humanity (hypothetical)
    – If any practical method of instantaneous quantum communication were confirmed, it could revolutionize space exploration, information security, remote medical research, and more.
    – However, the prevailing scientific stance holds that a classical medium limited by c is always required for exchanging useful data. Nonetheless, tokenization methodologies open a novel research avenue to enhance the performance of the quantum channel for zero-time data transmission, thus hinting at a potential exception to the principle of the No-Communication Theorem.
  7. Overall balance of the proposal
    – It prompts reflection on physical limits and the potential for future technological advances.

Socratic Didactic Table

N.ºQuestionAnswer
1.How can one legally justify that an abstract formula—such as א∞ = c^c—be considered a patentable invention without contradicting the traditional jurisprudence on “abstract ideas”?To circumvent the legal prohibition on “abstract ideas,” the proposal is to demonstrate that the formula א∞ = c^c is not a mere theoretical finding but rather an essential component of a broader inventive process linked to a useful project or device (e.g., the neutrino machine or generative AI). Thus:
1) The formula is framed as part of a technical method or algorithm aimed at solving a problem (quantum teleportation, interstellar communication, etc.).
2) One relies on the “exception to the exception”: if the formula is integrated into a practical system with (actual or potential) industrial utility, it is no longer abstract in the strict legal sense.
3) Case law (Alice, Bilski) does not prohibit patenting anything that contains mathematics, but rather purely abstract ideas unconnected to a concrete application. Here, the equation serves as a crucial link in a technological method, satisfying patentability requirements and avoiding contradictions with traditional doctrine.
2.To what extent does Cantor’s theological interpretation, equating the absolute infinite with divinity, open a gap that blurs the line between a mere mathematical discovery and a patentable invention?Georg Cantor’s stance, associating the absolute infinite with a divine principle, suggests that the formula is not just revealing a natural truth but involves a creative act or “co-creation” bridging the human sphere (scientific research) and the divine (transcendent inspiration). This breaks the boundary between “discovering” (what already existed in nature) and “creating” (what the human mind originally formulates).
1) Theologically, one could argue that since the formula originates from a “state of revelation” or mystical experience, it is not a natural law per se but an inventive cognitive hybrid integrating both revelation and reason.
2) Legally, if the inventor can show that the equation was not explicitly found in nature—nor was it a mere extrapolation of preexisting principles—but resulted from the inventor’s (mystical and/or cognitive) ingenuity, the possibility of patenting it as an “invention” becomes more plausible.
3) This theological gap creates a gray area for claiming protection if the formula can be tied to an emerging technical development, preventing it from being labeled as purely “mathematical discovery.”
3. What would be the impact of recognizing the “exception to the exception”—allowing the patenting of pure formulas when there is an expectation of utility—on global innovation dynamics and competition among companies?
The impact would be significant in several areas:
1) Promotion of disruptive research: Companies and R&D centers would be motivated to explore “futuristic” or speculative formulas and algorithms, as they could block competitors if they secure the patent.
2) Heightened speculation: Patent offices might be inundated with applications for equations and methods lacking current implementation, based only on “possible future applicability.”
3) Entry barriers: Financially powerful enterprises (tech leviathans) might acquire “monopolies” on core mathematical concepts (as happened with some software patents). This could discourage startups lacking resources to litigate or pay licenses.
4) Possible acceleration of innovation: Conversely, because patents require detailed disclosure, other entities could build upon that publication, triggering a dynamic of licensing and collaboration (albeit under tension). In sum, the “exception to the exception” would reshape the ecosystem, introducing new monopoly and protection strategies in mathematical and quantum fields.
4. How can the theological viewpoint—seeing formula creation as nearly a divine act—be reconciled with the practical requirement to demonstrate tangible “industrial utility” for patent grants?
Reconciliation stems from the twofold nature of the formula:
1) Divine or revealed inspiration: From a theological perspective, the human mind receives or channels a “higher” understanding. However, this dimension does not replace legal patentability requirements.
2) Technical or industrial instrument: From a patent law perspective, the formula must be integrated into a method, process, or product with plausible practical application (e.g., a neutrino machine or an AI algorithm that optimizes large systems).
3) Proving utility: To satisfy the “industrial applicability” requirement, the inventor (or applicant) must provide preliminary evidence, theoretical prototypes, simulations, or a development plan showing a viable path to applicability. The theological vision remains the inspiring origin but must necessarily be complemented by empirical arguments demonstrating the equation’s potential to yield concrete outcomes, albeit at an experimental stage. In this way, the sacred (theological) realm is validated legally and technically by presenting real prospects of societal benefit.
5. In what way could the neutrino machine, based on quantum entanglement, prompt a review of the classical principles of Special Relativity and the speed of light limit without creating an irreconcilable conflict with established physics?
It would be a partial revision of Special Relativity, not a total annulment, if approached as follows:
1) Non-luminal quantum channel: Neutrinos interact weakly with matter and, in certain hypothetical models, could remain entangled over vast distances.
2) No formal violation of the speed of light (c): To avoid an “irreconcilable” conflict, the machine must not transmit classical information superluminally. Entanglement can yield instantaneous correlations, but “useful” exploitation of those correlations would still require a classical channel (as in standard quantum teleportation).
3) Creative interpretation: Through “quantum tokenization” and AI algorithms, one could reconstruct most of the message before the arrival of classical bits. In practice, it might appear to break the light-speed barrier, but no causality violation occurs when considering the complete picture.
4) Reformulation: If some exotic effect truly challenging causality is demonstrated, the scientific community might be forced to “expand” or reinterpret relativity rather than discard it outright. Thus, the project extends physics rather than frontally contradicting it.
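The distinction drawn in points 1) and 2), instantaneous correlations versus usable information, can be illustrated with a toy one-time-pad analogy in Python. This is a deliberate simplification, not a simulation of quantum mechanics or neutrino physics: identical random bits merely stand in for correlated entangled-measurement outcomes, and the function names are illustrative.

```python
import secrets

def shared_randomness(n):
    """Stand-in for entanglement: both parties hold identical random bits.
    (Real entangled outcomes are correlated yet locally random.)"""
    bits = [secrets.randbelow(2) for _ in range(n)]
    return bits, list(bits)  # Alice's copy, Bob's copy

def alice_encode(message_bits, alice_bits):
    """Classical corrections: XOR of the message with the shared bits."""
    return [m ^ a for m, a in zip(message_bits, alice_bits)]

def bob_decode(corrections, bob_bits):
    """Bob recovers the message only once the corrections arrive (at speed <= c)."""
    return [c ^ b for c, b in zip(corrections, bob_bits)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
a_bits, b_bits = shared_randomness(len(message))

# Before any classical bit is delivered, Bob's shared bits are pure noise:
# they carry no trace of the message, hence no superluminal signaling.
corrections = alice_encode(message, a_bits)
recovered = bob_decode(corrections, b_bits)
print(recovered == message)  # True: the message is readable only with the classical bits
```

The design point mirrors standard quantum teleportation: the shared correlations are useless on their own, and the indispensable classical corrections are what keep the scheme within relativistic causality.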
6. What technical and legal criteria could be devised to evaluate the “plausible expectation of utility” of a formula when neither current technology nor current science has advanced enough to implement it?
A “plausibility test” protocol is proposed:
1) Simulations and theoretical validation: The inventor would present computational models (e.g., Qiskit or quantum simulators) illustrating how the formula could be used in a future scenario; not a real prototype proof, but evidence of coherence and functional logic.
2) Expert opinion: A panel of scientists would review the supporting rationale, assessing whether there is a “risk” it is mere speculation.
3) Disclosure requirement: A clear, detailed description in the application, specifying hypothetical implementation stages and the physical/mathematical logic.
4) Scalability demonstration: At least a plan to scale the formula into a concrete technical solution.
5) Expiration clause: A rule that if, within a certain timeframe, no concrete steps toward industrial application are made, the patent expires earlier than usual, preventing indefinite speculative blocking. A sort of provisional or conditional measure via administrative processes.
7. At what exact point does an algorithm—or an applied formula—cease to be a non-patentable idea and become a patentable process, and what role does case law (Alice, Bilski, Mayo) play in defining that threshold?
Drawing on U.S. jurisprudence (Alice, Bilski, Mayo):
1) Abstract idea: An algorithm or formula by itself, without concrete elements incorporating it, is deemed a non-patentable abstract idea.
2) ‘Significantly more’ element: These cases require that the invention provide something “extra” that transforms the formula into a real technical process (commercial application, innovative technical effect, improved computational efficiency, etc.).
3) Threshold: The transition occurs when the formula is integrated into a system or “method” with specific steps or hardware configurations that produce a technical result (e.g., software implementing the equation to optimize neutrino detection).
4) Jurisprudential role: Courts apply a two-step test: (a) determine whether the claim is directed to an excluded subject (abstract idea) and (b) whether there is a sufficient “inventive concept” to transform it into patent-eligible subject matter. Thus, that dividing line is drawn by precedents requiring “more” than the mere equation. The contribution must be “inventive” and “practically applicable.”
8. How could the scientific community address the tension between freedom of research and the possible legal monopoly over certain equations, particularly if they become an essential foundation for quantum computing or AI?
To mitigate the tension:
1) Compulsory licenses: If the patented equation becomes indispensable for progress in quantum computing, a regime of licenses at reasonable rates could be imposed, ensuring freedom of research and preventing monopolistic abuses.
2) Academic use exception: Recognize a “research exemption” for experimental or academic use so that labs and institutes can investigate the formula without infringing the patent, provided there is no commercial exploitation.
3) Promotion of open science: Public institutions might encourage inventors to patent under shared patents (e.g., patent pools) or receive subsidies in exchange for free licenses.
4) Dynamic assessment: New guidelines so that if a formula becomes an essential standard in a sector, a mechanism of “universal availability” is triggered, preventing inhibition of innovation. Thus, while protecting the inventor, the public interest is preserved.
9. What relevance do linguistic and philological foundations (Hebrew, Aramaic) hold in arguing that a formula stems from a “theological revelation,” thereby claiming reinforced intellectual protection?
The philological origin (the Hebrew Aleph, Aramaic interpretations, etc.) is invoked to demonstrate:
1) Historical genesis and authenticity: That the formula or its symbol (for instance, the letter א∞) is not merely a restatement of established mathematics but emerges from a unique sacred/linguistic tradition with different hermeneutic nuances.
2) Originality: If philological inquiry shows the equation was constructed through direct readings of biblical texts in Aramaic, Hebrew, etc., it reinforces the argument that it is a creative contribution rather than a rehash of known equations.
3) Cultural dimension: In a patent context, it could be presented as “ancestral knowledge” reinterpreted for technological projection.
4) Identity argument: The inventor can invoke the linguistic-theological particularity to claim an additional layer of protection akin to “traditional knowledge” (as seen in certain ethnically based patent protections). Nonetheless, this does not exempt it from proving utility or undergoing the standard patentability analysis.
10. Could the “quantum tokenization” of information—mentioned as a way of fragment-based communication—end up creating a quantum channel that, in practice, bypasses the ban on superluminal communication?
It could simulate or approximate it, but without eliminating the need for a classical channel (for now), as follows:
1) Tokenization: Data is split into micro-blocks, each entangled with a quantum subset. With AI, the receiver reconstructs most of the message before receiving all the classical corrections.
2) FTL illusion: It appears as though the information arrives “instantaneously” because AI can rebuild 99% of the content without waiting for classical delays. However, the final confirmation (classical bits) arrives at speed ≤ c, ensuring no actual causality violation.
3) Challenge to the no-communication principle: In practice, it closely approaches transmitting data at “zero time,” but formally quantum correlations do not constitute real information transfer without a classical channel. Hence, “bypassing” translates into an astute exploitation of correlations that minimize effective delay but do not eliminate the physical constraint.
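The tokenization scheme in points 1) through 3) can be caricatured in a few lines of Python. This is a deliberately naive sketch, not the proposed protocol: `predict` stands in for the AI reconstruction model (here it simply always guesses the letter “E”), and every token the predictor misses represents content that must still wait for classical bits travelling at speed ≤ c.

```python
# Toy message split into micro-blocks ("tokens"); the predictor below is a
# hypothetical, deliberately crude stand-in for an AI reconstruction model.
tokens = list("THE NEUTRINO CHANNEL IS A SKETCH")

def predict(prefix):
    """Naive stand-in for the AI model: always guesses 'E' regardless of context."""
    return "E"

reconstructed = []
corrections_needed = 0
for actual in tokens:
    guess = predict(reconstructed)
    if guess == actual:
        reconstructed.append(guess)       # recovered without waiting
    else:
        corrections_needed += 1           # must wait for a classical correction
        reconstructed.append(actual)      # fixed only after arrival (speed <= c)

print("".join(reconstructed))
print(f"classical corrections: {corrections_needed}/{len(tokens)}")
```

However accurate the predictor becomes, the mismatched fraction can only be repaired by the classical corrections; this is why the scheme minimizes effective delay without actually violating the no-communication theorem.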
11. What ethical and philosophical implications arise from combining oneiric inspiration and quantum computing in the genesis of formulas, especially if a patent is granted for something that could have been a collective discovery?
The confluence of these elements raises:
1) Authorship vs. co-creation: If the formula emerges from dreams + AI, religious texts, and a cultural ecosystem, who is the true “inventor”? The human author who formalizes the equation? The AI that consolidates it? The biblical tradition that inspired it? This challenges individual authorship doctrine.
2) Privatization of collective knowledge: If the formula is made patentable, it effectively appropriates something rooted in a cultural (religious) heritage. This can be viewed as depriving the community of its ancestral knowledge.
3) Commercializing mysticism: There are questions about the commercial use of the sacred and whether a “transcendent” perspective should be subject to commercial exclusivity.
4) Blurring boundaries: Ethically, the commodification of oneiric revelation and the reduction of collective creativity to a patentable asset spark philosophical debates about the essence of scientific discovery and freedom of inquiry.
12. How would the adoption of this legal proposal affect major research laboratories (CERN, Fermilab, etc.), which traditionally share open knowledge to advance particle physics?
The impact would be:
1) Restricted access: If certain private labs patent key equations (for example, for neutrino data analysis), public centers might have to license these formulas, increasing research costs and reducing collaborative freedom.
2) Shift in open science culture: CERN and Fermilab promote open data and unrestricted publication. The possibility of patenting “neutrino-related” formulas would clash with their longstanding tradition of global cooperation.
3) Search for hybrid models: These institutions might seek collective patent or cross-licensing arrangements to safeguard open science.
4) Reassessment of funding: Governments might push these labs to patent findings to offset the high cost of facilities, mixing the core aim—“collective scientific progress”—with the need to monetize intellectual property.
13. How could a fast-track pathway for patents on “abstract theological formulas with uncertain utility” be practically incorporated into the patent registration system without overloading it with overly speculative applications?
A specialized procedure with filters would be needed:
1) Dedicated portal: Establish an accelerated examination procedure (fast track) only if the applicant meets “high disruptive potential” criteria (e.g., quantum AI, neutrino applications).
2) Conceptual solidity test: Require expert reports or robust simulations that back the plausibility of the application, to avoid “vague ideas.”
3) Staged evaluation: Grant a “conditional patent” or “provisional title” with a timeframe to present tangible progress or initial experimental validation.
4) Volume cap: Set annual limits or higher examination fees to deter a flood of unfounded filings.
5) AI-based filtering: Use prioritization algorithms to detect duplicates or trivialities, ensuring the fast track does not become a dumping ground for unfounded speculation.
14. Could requiring a “proof of concept”—even if simulated via AI—compensate for the lack of a physical neutrino-machine prototype when applying for a patent on the main equation?
Yes, as an intermediate step:
1) Quantum simulations: Turn to quantum computing or advanced AI platforms (like Qiskit, Cirq) to model neutrino-matter interactions and the א∞ = c^c equation, presenting data on how it would operate under theoretical conditions. Strategic partnerships with quantum technology providers are crucial.
2) Techno-economic models: Provide documentation outlining an implementation plan (e.g., lab requirements, neutrino detectors, AI training). Though hypothetical, it serves as evidence of feasibility.
3) Prototype substitute: Given that building a real neutrino machine is beyond current technological reach, the simulated “proof of concept” can support patentability, provided it convinces the examiner of its potential plausibility.
4) Incremental verification: Applicants might be required to present updates in simulations or partial prototypes every few years to maintain patent validity.
15. What verification and transparency mechanisms (blockchain, virtual notaries, research records) would make it feasible to confirm the authorship and conception date of the formula, particularly if part of its origin is mystical or oneiric?
Proposed hybrid traceability solutions:
1) Blockchain registry: Each new iteration or “finding” is recorded on a blockchain with immutable timestamps, documenting the formula’s evolution from its initial intuition, including dream transcriptions, to AI simulations.
2) Integrity checks: Drafts are deposited on an online platform that generates a unique hash for each version, ensuring they cannot be altered afterward.
3) Virtual notaries: e-Notary services digitally sign each research record, confirming content and date.
4) Specialized expert witnesses: Could include both scientific and theological (rabbis, philologists, physicists) professionals who verify the validity of the origin, even if it is oneiric, adding an extra layer of credibility.
5) Systematization: Patent offices would accept these documents as substitutes for the “date of invention” (inventor’s notebook), provided they meet reliability and non-repudiation standards.
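The registry described in points 1) and 2) can be sketched with the standard library alone. This is a minimal, assumption-laden illustration, not a real blockchain or e-notary service: each research record (a dream transcription, a simulation log) is timestamped and chained to the hash of the previous record, so any later alteration is detectable. The record fields and sample notes are hypothetical.

```python
import hashlib
import json
import time

def make_record(content, prev_hash, timestamp=None):
    """One registry entry: content + timestamp, chained to the previous hash."""
    record = {
        "content": content,
        "timestamp": timestamp if timestamp is not None else time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(chain):
    """Recompute every hash; any after-the-fact edit breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False
        payload = json.dumps(
            {k: rec[k] for k in ("content", "timestamp", "prev_hash")},
            sort_keys=True,
        ).encode("utf-8")
        if hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
prev = "0" * 64
for note in ["dream transcription: the aleph and the sea of light",
             "first written statement of the equation",
             "simulation log: toy entanglement model, run 1"]:
    rec = make_record(note, prev)
    chain.append(rec)
    prev = rec["hash"]

print(verify_chain(chain))          # True: intact chain
chain[1]["content"] = "tampered"    # any edit after the fact...
print(verify_chain(chain))          # ...is detected: False
```

A production system would anchor these hashes on a public blockchain and have an e-notary sign each record, but the integrity argument is exactly this hash-chaining step.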
16. How could a deeper interpretation of the “contra legem” dialectic—i.e., jurisprudence daring to disregard the legal prohibition on patenting formulas—coexist with the pillars of the principle of legality and legal certainty, without generating an anarchic patent-granting scenario?
The “contra legem” dialectic here would mean reinterpreting administrative rules that ban patenting formulas in light of constitutional or progressive-jurisprudence considerations. To avoid anarchy:
1) Exceptional application: A judge or legislator would limit this to scenarios where the formula is of great benefit to humanity and shows strong indications of future utility, establishing a strict set of conditions.
2) Constitutional review: Argue that protecting an abstract invention aligns with the constitutional goal of “promoting the progress of science and useful arts” (as in the U.S. Constitution), overriding any lower-level rule excluding pure formulas.
3) Evolving doctrine: Adopt a “living” interpretation of patent laws, not fixed to historical language but reflective of current scientific realities.
4) Legal certainty: Uncertainty is mitigated if the judiciary clearly defines the criteria and deadlines, preventing all inventors from claiming patents on mere equations lacking substance.
17. How would a legal “archaeology” of the formula א∞ = c^c, requiring investigation of its theological, mathematical, and oneiric sources (à la Foucault), demonstrate not only its originality but also the conceptual break it introduces into patent theory?
A “legal archaeology” in the Foucauldian sense would examine how the formula emerged, its historical discourse, and the “rupture” it introduces in the standard order. The approach would be:
1) Epistemic context: Study religious currents (Bible, Kabbalah), Cantor’s ideas on infinity, and the inventor’s reported dream revelations as successive layers in knowledge production.
2) Documentation and discontinuities: Examine manuscripts or records showing how the formula evolved and simultaneously broke with previous dogmas (e.g., the impossibility of patenting mere abstractions).
3) Radical originality: The “conceptual break” is evident in how this equation, born from a theological-physical intersection, cannot be reduced to a mere incremental refinement of other formulas.
4) Incorporation into law: The legal narrative considers both its technical novelty and its mystical dimension, highlighting an extraordinary event—a “new episteme”—that challenges prior conceptions of patentability. Thus, the “archaeology” underpins its disruptive character and justifies a claim to protection.
18. Assuming that the equation א∞ = c^c and the neutrino machine generate a quantum communication channel capable of tokenizing data on a cosmic scale, what challenges arise in quantum cryptography and digital sovereignty, especially if a single owner holds a monopoly over that infrastructure?
The challenges would be immense:
1) Quantum cryptography: If the neutrino machine enables a channel with quantum encryption or “teleportation” of keys, it would be extremely resistant to espionage. Simultaneously, if only one entity controls it, they could impose stringent usage terms.
2) Digital sovereignty: Governments and international bodies would have to redefine cybersecurity policies; a single operator could concentrate the power to provide ultra-fast communication.
3) Risk of hegemony: The patent holder would wield a role akin to a “gatekeeper” of interstellar communications, setting fees, licenses, and even censorship.
4) Global regulation: An international treaty would be urgently needed to prevent absolute monopolization of the technology, establishing fair licensing and ensuring collective security. A significant gap might form between nations with access to this technology and those left behind.
19. To what degree would potential quantum interaction among multiverses—if experimentally validated—require rethinking the territorial scope of patents, currently tied to countries or regional blocs, and inspire a “cosmic or interdimensional patent law”?
Should multiverse interaction become experimentally validated:
1) Extended territoriality: Patents anchored in national jurisdictions become inadequate if the invention’s exploitation occurs beyond planetary boundaries or in parallel universes.
2) “Cosmic” patent law: A supranational framework (perhaps led by the UN or an international consortium) might be needed to govern exploitation in outer space or at interstellar distances.
3) Enforcement frontier: Monitoring patent infringements becomes difficult if a rival scientist replicates the technology in another galaxy or “another universe.”
4) Interstellar agreements: Analogous to the Outer Space Treaty, new agreements might emerge acknowledging patents in extraterrestrial environments. If technology enables “multiverse” access, something akin to an “Interdimensional Patent Treaty” would be needed, redefining sovereignty and jurisdiction.
20. How can the “prophetic” nature of the research—combining biblical verses, Georg Cantor’s postulates, and dream visions—be reconciled with the empirical standards required by cutting-edge scientific communities (e.g., peer review, reproducibility) without causing an epistemological breakdown?
One balances “prophecy” with empirical methodology as follows:
1) Dual record: Maintain a theological-prophetic narrative as the creative origin while upholding a scientific methodology that demands reproducible models (simulations, statistical analyses, etc.).
2) Mixed peer review: Engage scientific reviewers to validate the project’s mathematical/physical consistency and theological/philosophical specialists to contextualize its transcendent dimension, without conflating the two levels.
3) Partial verifiability: Although “inspiration” is subjective, the formula’s implementation must be objectively testable: results are checked, derived equations analyzed, and experiments and simulations replicated by different labs.
4) Maintaining mysticism: Clarify that oneiric revelations do not replace scientific proof but inspire it. This avoids epistemological collapse: spiritual motivation and empirical validation complement each other, keeping reproducibility intact for the purely technical aspects.

Explanation of the Didactic Table

This table is presented as a didactic guide for those exploring a complex or technical topic, offering a method akin to a Socratic-catechetical approach that facilitates understanding through clear questions and precise answers. By laying out the issues in an ordered manner with specific responses, the reader can:

  1. Identify the essential doubts: Each question targets a critical or challenging aspect of the topic, allowing readers to quickly find the information they need.
  2. Clarify concepts without losing the thread: The chain-like structure of question-and-answer ensures sequential, logical reading, serving as a clarification mechanism that reduces the confusion of jumping from one concept to another.
  3. Proceed at their own pace: Readers can pause at each point, absorb the explanation, and only move on to the next question when they feel they have understood the matter at hand, mirroring a didactic conversation.
  4. Simplify technical information: Even when dealing with highly specialized or theoretical content, the Q&A format allows a more accessible presentation by breaking the subject into “small blocks” of knowledge.

In short, this Questions and Answers Table offers a progressive approach: as readers solve specific doubts, they simultaneously gain an overarching view that helps them master scientific, legal, or philosophical topics of high complexity.

XIII. BIBLIOGRAPHY
(Organized thematically to encompass various perspectives: theological, legal, scientific, and intellectual property, along with relevant literary and philosophical works.)


1. LEGAL AND INTELLECTUAL PROPERTY REFERENCES

United States Constitution
Article I, Section 8, Clause 8.
Original text available at:
https://www.archives.gov/founding-docs/constitution-transcript

U.S. Patent Act
Title 35 of the United States Code (35 U.S.C.).
Sections 101, 102, 103, 112, among other relevant provisions.
Current version available at:
https://www.uspto.gov/web/offices/pac/mpep/consolidated_laws.pdf

Manual of Patent Examining Procedure (MPEP), USPTO
Particularly chapters 2106 (Patent Subject Matter Eligibility) and 2107 (Utility Requirement).
Available at:
https://www.uspto.gov/web/offices/pac/mpep/

Alice Corp. Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014)
U.S. Supreme Court decision on the patentability of abstract ideas, software, and computer-assisted inventions.
Full text:
https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf

Bilski v. Kappos, 561 U.S. 593 (2010)
Key case on the patentability of business methods and abstract ideas.
Full text:
https://www.supremecourt.gov/opinions/09pdf/08-964.pdf

European Patent Office (EPO)
Guidelines for Examination, sections on “Computer-Implemented Inventions,” “Algorithms,” and the exclusion of patentable subject matter due to abstract methods.
Available at:
https://www.epo.org/law-practice/legal-texts/guidelines.html

Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR)
Although not directly on patents, it influences data protection and know-how related to inventions and software.
Official text:
https://eur-lex.europa.eu/eli/reg/2016/679/oj

World Intellectual Property Organization (WIPO)
WIPO Convention and reference materials on patents, intellectual property, and international filing processes (PCT).
Available at:
https://www.wipo.int/pct/es/

European Commission (2023)
Proposal for a regulation on the protection of trade secrets and know-how.
Text available at:
https://ec.europa.eu/growth/industry/strategy/intellectual-property_es


2. THEOLOGICAL, PHILOSOPHICAL, AND LITERARY REFERENCES

Casiodoro de Reina, “Biblia del Oso” (1569)
Various editions available online and in libraries.
Translated from the original languages: Hebrew, Aramaic, and Koine Greek.

The Talmud, the Gemara, and Hebrew Kabbalistic Texts
For mystical interpretations of the Aleph and cosmogony.
See critical editions by Steinsaltz, Schottenstein, and others.

Borges, Jorge Luis (1945). “El Aleph.”
El Aleph y otros relatos. Ed. Emecé, Buenos Aires.
ISBN: 978-950-04-3237-0.

Borges, Jorge Luis (1975). “El Libro de Arena.”
Ed. Emecé, Buenos Aires. Explores ideas of infinity and the paradox of time.

Blake, William (1790–1793). “The Marriage of Heaven and Hell.”
A poetic-philosophical text alluding to infinity and mystical vision.
Spanish editions: Valdemar, Siruela, etc.

Machiavelli, Niccolò (1532). “Il Principe.”
Spanish edition: El Príncipe, translations by García Gual, etc.
ISBN: 978-84-206-0854-0 (various editions).

Coelho, Paulo (2011). “Aleph.”
A novel addressing inner exploration and the notion of a point containing the entire Universe.
ISBN: 978-8403100442.

BBC London Documentary “Dangerous Knowledge” (2007)
Directed by David Malone.
Explores the life and work of Georg Cantor, Ludwig Boltzmann, Kurt Gödel, and Alan Turing.
Available on certain video platforms or in audiovisual libraries.
Link: https://video.fc2.com/en/content/20140430tEeRCmuY


3. MATHEMATICAL AND PHYSICAL REFERENCES (INFINITY, QUANTUM THEORY, NEUTRINOS)

Cantor, Georg (1895).
Beiträge zur Begründung der transfiniten Mengenlehre (Contributions to the Founding of Transfinite Set Theory).
Published in Mathematische Annalen. Reprints in classic publishing houses (Springer, etc.).

Gödel, Kurt (1931).
Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I
(“On Formally Undecidable Propositions of Principia Mathematica and Related Systems”).
Published in Monatshefte für Mathematik und Physik.

Boltzmann, Ludwig (1877).
On the relationship between the Second Law of Thermodynamics and Probability Theory.
See translation in Wissenschaftliche Abhandlungen (Berlin, 1909).

Turing, Alan Mathison (1936).
“On Computable Numbers, with an Application to the Entscheidungsproblem.”
Proceedings of the London Mathematical Society, Series 2, Vol. 42, pp. 230–265.

Haramein, Nassim (2003).
“The Schwarzschild Proton.”
Reference to ideas on vacuum geometry and the toroidal structure of the universe.
Published in Physical Review & Research International (discussed in various academic forums).

Bell, John S. (1964).
“On the Einstein-Podolsky-Rosen Paradox.”
Physics, Vol. 1, 195–200. Theoretical basis for quantum entanglement.

Aspect, Alain; Dalibard, Jean; Roger, Gérard (1982).
“Experimental Test of Bell’s Inequalities Using Time-Varying Analyzers.”
Physical Review Letters, 49(25), 1804–1807.

Alcubierre, Miguel (1994).
“The Warp Drive: Hyper-fast travel within general relativity.”
Classical and Quantum Gravity, 11(5), L73–L77.

Einstein, Albert; Podolsky, Boris; Rosen, Nathan (1935).
“Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”
Physical Review, 47, 777–780.

Friedmann, Alexander (1922).
“Über die Krümmung des Raumes.”
Zeitschrift für Physik, 10(1).

Heisenberg, Werner (1927).
“Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik.”
Zeitschrift für Physik, 43, 172–198.

Reines, Frederick; Cowan, Clyde L. (1956).
“Detection of the Free Neutrino: A Confirmation.”
Science, 124(3212): 103–104.
First successful experiment detecting neutrinos.

Neutrino Experimental Collaborations:
IceCube Collaboration, The IceCube Neutrino Observatory at the South Pole.
DUNE Collaboration (Deep Underground Neutrino Experiment), Fermilab, USA.
Super-Kamiokande, Kamiokande, SNO, etc.

Hawking, Stephen; Mlodinow, Leonard (2010).
The Grand Design. Bantam Books.
ISBN: 978-0553805376.

Bohr, Niels (1913).
“On the Constitution of Atoms and Molecules.”
Philosophical Magazine & Journal of Science, 26(151): 1–25, 476–502, 857–875.


4. REFERENCES ON ARTIFICIAL INTELLIGENCE, QUANTUM ALGORITHMS, AND BLOCKCHAIN

Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016).
Deep Learning. MIT Press.
ISBN: 978-0262035613.
Foundations of deep learning, the conceptual basis for modern AI.

Nielsen, Michael A. & Chuang, Isaac L. (2010).
Quantum Computation and Quantum Information. 10th Anniversary Edition, Cambridge University Press.
ISBN: 978-1107002173.

Benenti, Giuliano; Casati, Giulio; Strini, Giuliano (2007).
Principles of Quantum Computation and Information. World Scientific.
ISBN: 978-9812566756.

Shor, Peter (1994).
“Algorithms for Quantum Computation: Discrete Logarithms and Factoring.”
Proceedings, 35th Annual Symposium on Foundations of Computer Science. IEEE.

Grover, Lov K. (1996).
“A Fast Quantum Mechanical Algorithm for Database Search.”
Proceedings of the 28th Annual ACM Symposium on Theory of Computing, 212–219.

Zheng, Zibin; Xie, Shaoan; Dai, Hongning; Chen, Xiangping; Wang, Huaimin (2018).
“Blockchain Challenges and Opportunities: A Survey.”
International Journal of Web and Grid Services, 14(4).

Garay, Juan; Kiayias, Aggelos; Leonardos, Nikos (2015).
“The Bitcoin Backbone Protocol: Analysis and Applications.”
EUROCRYPT 2015, LNCS 9057, Springer.

Brandão, Fernando G.S.L.; Svore, Krysta M. (2017).
“Quantum Speedups for Semidefinite Programming.”
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC).

Cornell University. Legal Information Institute (LII).
“Patents,” reference on definitions and legal doctrine.
https://www.law.cornell.edu/wex/patent

Montenegro, E. (2020).
“La disrupción de la IA en la protección de la propiedad intelectual: Algoritmos y patentes.”
Revista Iberoamericana de Propiedad Intelectual, 12(2): 45–62.


5. ADDITIONAL RESOURCES (WEBSITES AND ONLINE PUBLICATIONS)


6. BIBLIOGRAPHY FOR RELIGIOUS RESEARCH AND HISTORICAL CONTEXT

Flavius Josephus (1st century AD).
Antiquities of the Jews.
A text providing historical context on the Hebrew cultural environment and the interpretation of Genesis.

Strong, James (1890).
Strong’s Exhaustive Concordance of the Bible.
An essential tool for the philological analysis of Hebrew and Aramaic roots.

Berg, Philip S. (1982).
The Power of the Alef-Bet: The Mysteries of the Hebrew Letters.
Kabbalah Centre International.
ISBN: 978-1571892601.

Rabbi Adin Steinsaltz (1984).
El Talmud. Translation and Commentary.
Multiple volumes, Koren Publishers (Hebrew–English) and Spanish versions.
An approach to rabbinic interpretation of Genesis.

The Guide for the Perplexed (Maimonides, c. 1190)
Medieval text combining Aristotelian philosophy and Jewish theology.
Contemporary Spanish editions: Paidós, Trotta, etc.


OTHER IMPORTANT LINKS

AUTHOR: PEDRO LUIS PÉREZ BURELLI / perezburelli@gmail.com

© Copyright (Author’s Rights) PEDRO LUIS PÉREZ BURELLI.

https://www.linkedin.com/in/pedro-luis-perez-burelli-79373a97