

Author: Faria Abdulla, IV year of LL.B. (Hons.), Unity Law College.


The paper reflects on the importance of subjectivity in a post-artificial-intelligence (AI) world, especially with regard to civil activities.

Indeed, AI is shaping the future of humanity across nearly every industry. It is already the main driver of emerging technologies, and it will continue to act as a technological innovator for the foreseeable future.

Thus, integrating AI with subjectivity (which is the driving force of human beings) is of core importance.

In addition, it is clear that even if AI is not currently a significant participant in social life, it will be in the near future. Despite the potential dangers associated with endowing AI with some kind of subjectivity, such a course is inescapable, and should be considered sooner rather than later.

To better prepare for the future society in which AI will have a much more pervasive influence on our lives, a better understanding of the difference between AI and human intelligence is necessary. Human and biological intelligence cannot be separated from the process of self-replication. Therefore, a fundamental gap exists between human intelligence and AI until AI acquires artificial life. Humans’ social and metacognitive intelligence most clearly distinguishes human intelligence from nonhuman intelligence. Although advances are likely to improve the functioning of AI, AI will remain a function of human activity. However, if AI can learn to self-replicate and thus become a life form, albeit a man-made one, outcomes become uncertain.

It has become quite necessary to reject the myth that the criteria of subjectivity are sentience and reason. The main prospective aim of modern AI research is the creation of technical systems that implement the idea of strong intelligence. In my view, the path to the development of such systems runs through research into perception.

Here we formulate a model of the perception of the external world which may be used to describe the perceptual activity of intelligent beings. We consider a number of issues related to the development of the set of patterns which intelligent systems will use when interacting with the environment.

The key idea of the presented perception model is that of subjective reality. The principle of the relativity of the perceived world is formulated, and it is shown that this principle is an immediate consequence of the idea of subjective reality.


This paper reflects upon the importance of endowing Artificial Intelligence with subjectivity.

Subjectivity is understood here as a concept indistinguishable from legal personhood; however, this does not entail accepting that moral subjectivity is the same as moral personhood. Subjectivity is a complex attribute which may be recognized in certain entities and/or assigned to others. This attribute is, in my opinion, gradable, discrete, discontinuous, multifaceted and fluid.

This means it can contain more or fewer elements of different categories (e.g., responsibilities, rights, competencies, and so on), which can in most cases be added or taken away by a lawmaker; the exception being human rights, which, according to the prevalent opinion, cannot be taken away.

Among others, such a character can be seen in contemporary Polish Civil Law, which distinguishes the following concepts determining subjectivity:

  • Natural person, i.e., a human being (Article 8 sec. 1 of the Polish Civil Code)

  • Legal (juristic) person, i.e., the State Treasury and organizational entities in which specific provisions vest legal personality (Article 33 of the Polish Civil Code)

  • Other entities, i.e., those not classified as any type of person but endowed with some claim rights, responsibilities, and/or competencies, e.g., animals.

Therefore, while I accept the general spirit of the Bundle Theory of legal personhood proposed by Kurki (2019), I do not agree with all its details. The theory itself is based on two key tenets:

1. Legal personhood of X is a cluster property and consists of incidents which are separate but interconnected.

2. These incidents involve primarily the endowment of X with particular types of claim rights, responsibilities, and/or competences.
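The two tenets above can be sketched as a toy model: legal personhood as a cluster of separable "incidents" that a lawmaker may grant or revoke, with human rights marked as non-removable in line with the prevalent opinion noted earlier. All class, incident and subject names here are hypothetical illustrations, not drawn from Kurki's text or from any legal system.

```python
# Toy model of the Bundle Theory: personhood as a set of separable incidents.
INALIENABLE = {"human_rights"}  # per the prevalent opinion, not removable


class LegalSubject:
    """A subject whose personhood is simply the set of incidents it holds."""

    def __init__(self, incidents=()):
        self.incidents = set(incidents)

    def grant(self, incident):
        # A lawmaker may add an incident to the bundle.
        self.incidents.add(incident)

    def revoke(self, incident):
        # A lawmaker may remove most incidents, but not inalienable ones.
        if incident in INALIENABLE:
            raise ValueError(f"{incident} cannot be taken away")
        self.incidents.discard(incident)


human = LegalSubject({"human_rights", "claim_rights", "responsibilities"})
company = LegalSubject({"claim_rights", "competences"})

company.revoke("competences")  # permissible: the bundle simply shrinks
print(company.incidents)       # {'claim_rights'}
```

The point of the sketch is only that the bundle is gradable and discrete: different subjects hold different subsets, and membership can change over time without any single necessary and sufficient condition for "personhood".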

The concept of legal subjectivity itself is open-ended, defeasible and ascriptive in a Hartian sense. One may paraphrase Hart as follows: our concept of an action [here, legal subjectivity], like our concept of property, is a social concept, logically dependent on accepted rules of conduct. It is fundamentally not descriptive but ascriptive in character; and it is a defeasible concept, to be defined through exceptions and not by a set of necessary and sufficient conditions, whether physical or psychological (Hart, 1949).

It is possible to think of subjectivity, especially legal subjectivity, in at least three ways: (1) philosophically, (2) from the perspective of law in general, and (3) from the perspective of a law that is valid in a certain place and at a certain time. However, it is important to note that these perspectives do not simply refer to the same object viewed from the most general to the most specific: they represent three different kinds of thinking and concern different objects. Although these kinds of thinking are often confused, it is important to avoid falling into this trap. Of course, as all three ways of understanding subjectivity influence culture, they also influence each other. The first can be regarded as religious thinking, in the sense of Finnis (2011), and is often connected with moral subjectivity. The second relates to the subjectivity present in law, but not in a law; its overlap with the demands of the law of a given country remains a matter of controversy. Such controversy generally remains unnoticed; it only becomes significant, and of practical value, in times of crisis, especially political or humanitarian ones. The third is purely juristic: it relates only to the concept as used in the acts and doctrine of concrete legal systems, as well as in the European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) and other documents of European law.


In the literature, two key analogies are used when discussing the possibility of acknowledging legal personality or legal personhood for AI systems: one between AI and animals, and another between AI and juristic persons or collective subjects (Solaiman 2017; Chen and Burgess 2019; Kurki and Pietrzykowski 2017). Many researchers agree that legal subjectivity in the form acknowledged for a human being is unique and cannot be acknowledged for AI, especially because, for now at least, AI does not demonstrate any evidence of being conscious and sentient. In contrast, analogies with animals appear more suitable, as the abilities of AI are limited in relation to humans. On the other hand, AI can be regarded as analogous to collective entities in the sense that it is an artificial, nonbiological creation lacking sensations and consciousness. Besides, according to the traditional Western view, animals and juridical persons are, next to humans, the only true candidates for broadly or narrowly determined legal subjectivity. Many jurists would be surprised to learn that in some countries or cultures, rivers have also been acknowledged as subjects of law, such as the Ganges and Yamuna in India and the Whanganui in New Zealand.

However, using an analogy with animals or juristic persons to justify awarding potential legal subjectivity to AI rests on a certain superficial assumption. Firstly, this analogy assumes that there is a single hierarchy or sequence of entities, organized according to their degree of similarity to human beings; and, secondly, that the place of an entity in this hierarchy or sequence (based on its degree of development) determines the scope of subjectivity attributed to it. It follows that animals take the lowest place in the hierarchy.

The next place could be taken by contemporary AI, which lacks sentience and whose reason is not perfect. The next position up the hierarchy is taken by collective entities, because, while they lack sentience, they have collective reason; such reason corresponds to, and may surpass, human reason because its substrate is human. Finally, at the top of the hierarchy are human beings; these are sentient and have reason which is, according to traditional views, the best, prototypic example of its kind.


When deciding about legal subjectivity, one should rather focus on the fact that the law, as it is assumed here, is not only a human endeavor but, more importantly, a social one: many animals who live a social life also obey rules which are very similar to human law (Rowlands 2012). Thus, just as there are doubts about the existence of private language, there are also justified reasons not to believe in private law, understood as a law imposed by a person on herself; such a concept belongs rather to the philosophical understanding of law. If the social character of the enterprise of law is recognized strongly enough, it should be clear that the true criterion of subjectivity is participation in social life, whatever the role. However, two things should be insisted upon when considering this condition. Firstly, such social activity does not require active participation, i.e., sovereignly establishing social relations or entering into interactions with other people; it is rather about being present in social life. Nowadays even those persons who lack consciousness or reason because of age or health are able to participate or be present in social life, at least in the sense that they have the status of someone’s children or parents (“have the status” being a social, not a biological, fact): they all play some role in social life and they cannot be ignored or excluded from the social network (Jonca 2015/2016; Obladen 2016). If they were absent, the network of social relationships would necessarily be different. For example, if my brother were in a coma and for this reason were acknowledged as not existing in the social network (“Coma”, 1977), his wife would not be my sister-in-law, nor would she be his wife. Secondly, participation or presence in social life may be the result of the social subject holding some intrinsic or instrumental value.
However, the possession of such intrinsic or instrumental value does not constitute a sufficient condition for participation or presence in social life: many such objects of value have no ability to participate in social life. Rather, it is the social-relational value that is important, i.e., that which determines the nature of the relationship between the value bearer and another social subject. A painting that excels in artistic categories is intrinsically valuable because it has certain features; however, it does not influence the character of social relationships.

Certainly, the admission to participate or be present in social life, and any attribution of intrinsic or instrumental value, depends on the given society, time and place. For example, in ancient Rome, although citizens and slaves both participated in social life, the former were assigned intrinsic value and the latter instrumental value; however, both were acknowledged as legal subjects, albeit in a broader or narrower scope (van den Berg 2016). When considering this differentiation, a significant fact should be noted: if a given subject participates in social life and is believed to be intrinsically valuable, the natural consequence is that she should be treated, in a prospective rather than prescriptive sense, as a legal subject within this or another scope.


Now let us link the above theories with real-life scenarios and understand how AI actually works.

Does Artificial Intelligence understand subjective realities?

Before we proceed with this, remember we are not talking about the physical world when we discuss subjective reasoning.

Now, to answer the question of whether AI understands subjective realities, I’m going to say “YES”.

I’m doing so not to contradict the commonsense belief that computers are not living life forms but merely machines. Rather, I’m saying yes so as to extend, ever so slightly, some definitions that we fearfully hold on to. We don’t need to argue that machines are different from human beings. They most certainly are.

I’m suggesting that basic subjective reasoning is programmable. Indeed, this reasoning power might not be observable just yet, but we are approaching that point as artificial intelligence evolves. From a human perspective, we say that everything we can perceive to be true using our conscious brain is subjective reality. Just how conscious does a machine need to be to think and determine what is real and what is not? Humans use memories and experiences to deduce what is real, which can be confirmed later in time; we call this the ability to perceive mentally. Computers, on the other hand, think by running programs along with memory information to calculate possible outcomes.

Example: A person might visualize a scene of their roommate coming home from a hard day at work. A human being might reason, or have an intuitive conviction, that producing a meal, wine and candlelight will result in an evening of pleasant conversation.

An artificially intelligent android would instead make calculations based on previous outcomes of the same action described above. It could calculate, based on previous data, what might possibly occur in the future given what it did or did not do in the past. Future scheduled events, or events modified in light of new information, only need to be labeled differently; if an event has a similar outcome, then that is a reality as well.
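The calculation described above, counting which outcome most often followed a given action in stored data, can be sketched in a few lines of Python. The function name, action labels and outcome labels are all hypothetical, chosen purely for illustration of the idea, not taken from any real system.

```python
from collections import Counter


def predict_outcome(history, action):
    """Return the outcome that most often followed `action` in the past."""
    outcomes = Counter(out for act, out in history if act == action)
    if not outcomes:
        return None  # no precedent to calculate from
    return outcomes.most_common(1)[0][0]


# The android's stored "memories": (action, observed outcome) pairs.
history = [
    ("serve_meal", "pleasant_conversation"),
    ("serve_meal", "pleasant_conversation"),
    ("serve_meal", "argument"),
    ("skip_meal", "silence"),
]

print(predict_outcome(history, "serve_meal"))  # pleasant_conversation
```

The machine’s “subjective reality” here is nothing more than the most frequent precedent in its memory; a novel action with no recorded precedent yields no prediction at all, which is precisely the gap between simulation and human intuition discussed below.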

Artificial intelligence will never be identical to the human mind, but what it can do is simulate. It can and will make plans that result in actual occurrences taking place. The human plans a meal, or the robot plans a meal: will the roommate eat it and talk about their day? We only need to worry when the robot burns the food because it wants to be taken out to dinner.


Will AI ever become sentient? Be very, very skeptical of any answer to this question. Anyone who tries to answer it definitively probably doesn’t know what they’re talking about.

Here are the problems:

  1. “Sentience” is too ambiguous a term. Do you mean the ability to feel or experience something? Do you mean having self-awareness, i.e., consciousness? Or do you simply mean being able to look at a problem and determine the type of tool necessary to solve it, i.e., artificial general intelligence? As soon as someone brings up the term “sentience” in relation to AI, it suggests they really don’t understand much about what AI is or will be.

  2. We don’t understand sentience in ourselves well enough to make any real assertion about it. What is its physical mechanism? There is much of the human brain we do not understand; even if we can point to one part of the brain being involved in sentience, there may be some holographic representation of it, involving the interplay of several components, that we can’t yet see.

  3. Even if we fully understand human sentience (and we can’t say when, or even if, that will ever happen), that does not mean at all that we would be able to replicate it in a machine. The architecture of a computer is fundamentally different from a brain; it may be possible to replicate human sentience in a roundabout way, but it may not. We can’t even speak to the possibility yet, never mind the time frame. And even if we make considerable progress in this field, it seems likely that, far before we actually create sentience, we will be able to create non-sentient AI that can fool anyone into believing it is sentient. So, we may not even be able to know if AI is sentient or not.


The development of science and its technological applications, of course, allows us to extend the system of sensors on whose data humanity builds its modern picture of reality.

Eventually, it makes it possible to create completely new measurement methods, which have allowed mankind to enter a new world of information.

AI holds the key to unlocking a magnificent future where, driven by data and computers that understand our world, we will all make more informed decisions. These computers of the future will understand not just how to turn on the switches but why the switches need to be turned on. Even further, they may one day ask us if we need switches at all.

Although AI cannot solve all your organization's problems, it has the potential to completely change how business is done. It affects every sector, from manufacturing to finance, bringing about never-before-seen increases in efficiency. As more industries adopt and start experimenting with this technology, newer applications will be invented. AI will bring about a change even more widespread and sweeping than the introduction of computing devices. It will change the way we transact, get diagnosed, undergo surgery, and drive our cars. It is already changing industrial processes, medical imaging, financial modeling, and computer vision. We are well on our way to tapping into this enormous potential, and as a result, the future holds better and faster decision-making.


  • Cook, R. (1977). Coma. Pan Macmillan. (Everyone who has read this medical thriller knows how such exclusion could look: the crime investigated in the book involves putting people into comas, storing them as anonymous bodies suspended from the ceiling, and finally selling their organs when a buyer comes along.)

  • Finnis, J. (2011). Natural law and natural rights. Oxford University Press.

  • Hart, H. L. A. (1949). The ascription of responsibility and rights. Proceedings of the Aristotelian Society, New Series, 49.

  • Jonca (2015/2016); Obladen (2016). In societies where the killing of newborns, e.g., because of sex or disability, was accepted, the killed children were not included in the social network: they were not counted as heirs, they were not part of the genealogical tree, and they were not registered as, e.g., the first-born. Such children had no value in society, neither intrinsic nor instrumental; their existence left no trace.

  • Kurki, V. A. J. (2019). A theory of legal personhood. Oxford University Press.

  • Kurki, V. A. J., & Pietrzykowski, T. (Eds.). (2017). Legal personhood: Animals, artificial intelligence and the unborn. Springer.

  • Rowlands, M. (2012). Can animals be moral? Oxford University Press. (Reviewed in Notre Dame Philosophical Reviews.)

