AI Law And AI Ethics Are Brooding Over Those AI Alignment Tax Proposals |

We seem to have a love-hate relationship with taxes, or maybe I should more aptly say a hate-love relationship.

People tend to vociferously profess that they utterly hate taxes.

There are lots of reasons stated.

For example, a frequent complaint is that taxes take money out of your pocket. You’d perhaps rather be able to spend the pocketed money as you please, rather than handing the hard-earned dough over to the wisdom of the government for spending.

Another notable qualm is that figuring out your owed taxes can be overly complicated and, shall we say, taxing (pun!). Albert Einstein reportedly said that the hardest thing in the world to understand is the income tax. Maybe he would have had an easier time if the tax laws were written in the language of physics and mathematics (for my coverage of using AI to write and rewrite our laws into a strident computationally tractable form of coding, see the link here).

Some people have an objection to taxes as based on freedom of choice. They adamantly do not like the idea that taxes are often compulsory (usually backed by harsh tax compliance measures such as financial penalties or prison time). It would be one thing if you could voluntarily choose to pay your taxes, and an altogether different matter when being threatened and coerced into paying taxes. As such, the grievance heard is that the government takes taxes rather than people avidly paying taxes.

I suppose that Mark Twain encapsulated this abject dread and devout hatred for taxes in this infamous line: “What is the difference between a taxidermist and a tax collector? The taxidermist takes only your skin.”

We shouldn’t, though, dwell solely on the doom-and-gloom side of taxes.

Some people are begrudgingly okay with taxes, as long as the taxes are minimal and considered personally tolerable. Of course, your threshold of having those feelings is bound to differ mightily from person to person, and from circumstance to circumstance. What is too much tax? Those that are quite opposed to taxes on a fundamental basis are apt to say that zero taxes is the only appropriate amount.

Shifting gears, slightly, the word “tax” has taken on a variety of meanings in our society.

Besides the kind of tax that you pay out, there is also the notion of things sometimes being of a tiring or overtaxing nature. You might do an activity and feel taxed by having done it. Going to work each day and grinding away at the workplace can seem altogether quite taxing. You get home at night and are exhausted, worn out, and can barely lift a finger.

The essence of this variation associated with being taxed is that we allow a definitional range encompassing the consumption of nearly any kind of resource or semblance of power or energy. Your emotions can be taxed when dealing with a thorny issue facing your loved ones. Your physical strength can be taxed when you attempt to lift a heavy object. The generalized facet is that you are incurring a “tax” upon whatever effort or activity you are undertaking.

Can a tax such as a conventional income tax be construed as being outright good or worthwhile?

Well, you would certainly have to concede that society seems to think so to some extent.

We have somewhat collectively permitted ourselves to be taxed. Presumably, we do so to garner the various services and values from the government as a result of providing taxes to those in authority. History suggests that taxes have been around for a long time. It is said that Ancient Egypt was one of the first systematic instances of establishing a formalized tax. Most countries of the modern world have some kind of tax, whether directly or indirectly imposed.

You might be wondering why I am dragging you through a seemingly prosaic exploration of taxes and the act of taxing.

Here’s why: Artificial Intelligence (AI).

You see, there are some who believe we should be giving serious consideration to taxing AI. This taxing aspect would ostensibly be imposed to right a wrong, if you will. There ought not to be a free ride for AI, some insist, and as such placing a tax on AI is the proper thing to do.

If you are scratching your head and wondering what in the world these people are talking about, I’d like to refer you to my coverage of aligning AI with human values, see the link here. The concept is relatively straightforward. All of the AI that is being shoveled out into society is not necessarily well aligned with the needs and preferences of humankind. As I will explain in a moment, we have not only AI For Good, but we also regrettably have AI For Bad. The hope is that somehow we can curtail the AI For Bad. Stop the AI For Bad in its despicable tracks.

By some form of magic or trickery, we need to ensure that AI is aligned with humankind’s values and mores. A catchphrase that has bubbled up in the AI community and even become known to those outside of the AI insiders is that we desperately and surely need AI Alignment. One way or another there is a burning need to cajole or perhaps force those devising AI to make sure that their AI is aligned with suitable and sufficient human values. For my extensive and ongoing coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

One approach for garnering AI Alignment consists of proclaiming AI Ethics precepts and aiming to get everyone to abide by keystone Ethical AI principles. Another approach entails enacting new AI Laws that will provide both civil and criminal forms of punishment, seemingly prodding those that make or use AI to be more mindful of what the AI is doing. A combination of these “soft law” (AI Ethics) and “hard law” (AI Laws) approaches is rapidly emerging to cope with the deluge of AI that is undeniably in the AI For Bad camp.

Not enough, some contend.

There needs to be more on the line.

Can you guess what else could be done?

We might need an AI Alignment Tax to keep AI from going in the wrong direction.

It’s a proposal that raises eyebrows and is often either immediately spurned or warmly embraced.

So far, there hasn’t been much substantive discourse on the taxing of AI, though my earlier predictions remain steadfast that we would arrive there sooner or later. Taxes always seem to come up in nearly any endeavor. If there aren’t any taxes on something, the realization will ultimately arise as to why that particular thing has escaped being taxed.

We all are being taxed, so why shouldn’t AI also get taxed? You might find some people are quite irked that humans can be taxed and at the same time we allow AI to skate on through. Irksome. Ought to be the reverse, you might imagine, such that we should tax AI and stop taxing humans (raise your hands if you vote that to be the law).

No more free lunches for AI.

One basis for taxing AI would be the rather obvious aspect of a large source of tax revenue. When AI makes money, taxes are going to be right around the corner. We have wealth taxes. We have estate taxes. We have property taxes. We have inheritance taxes. Anything that has the golden touch of money is bound to be ripe for being taxed.

Ergo, the list of taxes would only be considered complete if we also had AI taxes.

Whoa, some say, if you start taxing AI then we are potentially going to harm or maybe even kill the golden goose. AI can be used for the betterment of humanity. The moment you start slapping taxes on AI, the AI For Good might dry up. It will be overly costly to try and produce AI that does good, such as AI aiming to cure cancer, combat climate change, and aid in a slew of vital earthly pursuits.

The reply is that indeed a tax overtly on all AI might be a bit overboard. Instead of solely doing a concerted money grab, perhaps the tax can be used to steer AI in a preferred direction. Let’s set up a tax on AI that drives the manner of AI development. Use an AI tax to focus on getting rid of AI For Bad.

Voila, proceed to unveil an AI Alignment Tax.

Mull that over.

Is an AI Alignment Tax a grand idea that ought to be heralded from the rooftops, or is it a horrid idea of an immensely unworkable nature that we should drop like a hot potato?

Let’s unpack the matter to see what we can decide.

Before leaping into the AI Alignment Tax topic, I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.

The Rising Awareness Of Ethical AI And Also AI Law

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I want to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
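The mechanics above can be illustrated with a deliberately tiny sketch. The lending scenario, the data, and the crude majority-vote “model” below are all hypothetical, invented purely to show how pattern matching mimics whatever skew sits in the historical decisions it is fed.

```python
# Hypothetical sketch: a naive pattern matcher trained on skewed historical
# decisions will faithfully reproduce that skew in its future decisions.
from collections import Counter

# Historical decisions as (neighborhood, approved) pairs. The "old" data is
# skewed: applicants from zone B were mostly denied, regardless of merit.
history = [("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", True)]

def train(records):
    """Learn the majority outcome per feature value -- crude pattern matching."""
    votes = {}
    for zone, approved in records:
        votes.setdefault(zone, Counter())[approved] += 1
    return {zone: counts.most_common(1)[0][0] for zone, counts in votes.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the learned "pattern" mirrors the skew
```

There is no mathematical misstep here; the model is doing exactly what it was asked to do, which is precisely why buried biases are so hard to notice.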

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, the official U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” that was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of AI Alignment Taxes.

The Rise Of AI Alignment Tax To The Rescue

I’d like to begin this exploration by first considering a technological form of taxation.

Those of you that are AI developers are likely already familiar with AI-related taxes of a technological sort. This is different from the monetary-oriented taxing of AI. Nonetheless, you could suggest that there is an analogous aspect involved.

When developing an AI application, you might be worried that various tweaking or design actions could end up serving as a kind of technology system performance “tax” on the AI app. In techie terminology, something can be a performance hit. This hit can be construed as a form of tax. Recall too that I earlier herein mentioned that the meaning of the word “tax” can be wide-ranging, including being able to refer to non-monetary consumption of resources or energies.

Consider an example of performance hits or “taxes” associated with AI technical performance aspects. We’ll conveniently look at this in the context of an AI alignment-related bit of research, suitably chosen to allow for a contextually comfortable journey.

In a research study posted last year entitled “A General Language Assistant As A Laboratory for Alignment,” researchers closely examined how LLMs (Large Language Models) and generative AI might be aligned toward human values: “Given the broad capabilities of large language models, it should be possible to work towards a general-purpose, text-based assistant that is aligned with human values, meaning that it is helpful, honest, and harmless. As an initial foray in this direction, we study simple baseline techniques and evaluations, such as prompting. We find that the benefits from modest interventions increase with model size, generalize to a variety of alignment evaluations, and do not compromise the performance of large models” (posted on arXiv).

The sentiment is that there are numerous ways to garner alignment, of which some might be better than others. The betterment can be associated with achieving greater alignment. At the same time, unfortunately, the betterment in alignment might dampen AI performance. If you are devising AI that turns out to be sluggish and unable to function in a time-needed fashion, you are potentially paying one price to get some other benefit. In turn, this raises questions as to whether the betterment being sought is worth the performance price being paid.

You don’t want to rob Peter to pay Paul if you can otherwise reasonably avoid doing so.

As stated in the paper: “A general concern about alignment is that it may impose a ‘tax’ on performance, such that aligned models may be weaker than raw or unaligned models. In the case of prompting and context distillation, it is straightforward to evaluate this question directly by performing evaluations with and without the prompt” (ibid).

In case you are wondering what their viewpoint about alignment consists of, here’s a quick snippet of their indication (see their paper for the full details):

  • “Alignment requires distinguishing between ‘good’ and ‘bad’ behavior. There are several different training objectives that may be used to accomplish this: (1) Imitation Learning: Here we simply train language models to imitate ‘good’ behavior via supervised learning with the usual cross-entropy loss. (2) Binary Discrimination: Given a sample of ‘correct’ behavior and a sample of ‘incorrect’ behavior, train the model to distinguish between the two. (3) Ranked Preference Modeling: Given a dataset of samples whose overall ‘quality’ is ranked in some way, we train models to output a scalar quality score for each sample whose value matches the ranking as closely as possible. For simplicity we focus on using pairs of ranked samples (i.e., binary comparisons), and we train our models to assign a higher score to the ‘better’ sample in each pair” (ibid).

According to this particular research study, and keep in mind that particulars matter since you ought not to wildly generalize from them, the paper indicates that a limited or small performance tax was incurred and was of negligible cost: “We find that prompts induce favorable scaling on a variety of alignment-relevant evaluations, impose negligible ‘taxes’ on large models, and can be ‘context distilled’ back into the original model” (ibid).

I trust that you can see how that type of tax is comparable to the idea of a tax being a type of overhead or added cost to something. In this instance, a potential performance hit. You’ve thusly now been introduced to the notion of a technological-focused form of an AI tax.

We turn next to a monetary tax.

Suppose that someone is devising an AI application. They do so without any consideration as to whether their AI aligns with human values. Is it shocking to you that they would be so blind or even possibly devilish?

Maybe the AI developers don’t care about human values, or perhaps they are so steeped in advancing AI that they aren’t cognizant of considering human values in their designs and coding. It could also be that they figure that later on, once the AI is working, they will try to embellish the AI with an alignment with human values.

Lots and lots of reasons can be conjured up.

Anyway, assume that society wants those AI developers and those that field AI to be more aware of AI alignment. By gosh, we want that at the front and center of what is being done with AI. We need to make sure that those involved in making and using AI are abundantly alerted that they had better align their AI with human values.

A potential stick and a carrot could be the focal point of an AI Alignment Tax.

We legally establish a tax that says AI must be aligned with human values. If your AI doesn’t align with human values, you get hit with an extremely hefty tax. If your AI does align with human values, perhaps the tax is minimal or not imposed at all.

Envision that the AI tax is widely publicized. Companies are made aware of the AI Alignment Tax. AI developers are made aware of the AI Alignment Tax. It makes big headlines. News coverage talks endlessly about which entities are paying huge AI Alignment Taxes and which ones aren’t. Society gradually drifts toward only using AI that has not had a noticeable AI Alignment Tax.

We might also provide a legal means to defer or reduce the AI Alignment Tax, once imposed, depending upon whether the AI maker or AI-using firm can fix the lack of AI alignment. In that sense, it isn’t so much that society wants the money associated with the tax and instead is trying to drive compliance toward well-aligned AI.

For example, a firm makes use of rottenly misaligned AI. A humongous AI Alignment Tax is imposed. If the company just keeps on using that AI, we are now in a deeply troubling space. They were willing to eat the tax and keep on employing badly aligned AI. This is not what we wanted. The aim is to get away from misaligned AI.

The AI Alignment Tax is both a carrot and a stick. Those that are genuinely attaining AI alignment will have nary much of such a tax imposed upon them. Meanwhile, AI developers and those that are using AI that is misaligned will get the tax stridently placed upon them and their AI. The cost therefore of the AI will go up, since there is now this added tax on top of whatever else they have spent to make and field the AI.

Furthermore, for those that decide to evade the tax, the government can rightfully come after them. Those dastardly people producing or utilizing AI that is not aligned with human values will have the government coming for them. You know how those taxing agents are, they will stop at nothing to get those that have transgressed and aren’t paying their due taxes.

All in all, this seems pretty sensible.

The devil though is decidedly in the details.

We need to soberly ponder some of the details so that you’ll be aware of the practicalities and impracticalities leveled at setting up AI Alignment Tax schemes:

  • How will the AI Alignment Tax be stipulated in the tax laws?

If the AI Alignment Tax is poorly stipulated such as being loosey-goosey, you could have all manner of software suddenly being subject to this new tax. Yikes, some would say, you are going to crush the software-making world. For my analysis of how vital it is to properly specify the legal definition of AI in our laws and contracts, see the link here.

The other side of that coin is that the AI Alignment Tax is so narrowly defined that very few if any AI applications would get included. That’s bad. The free lunch for AI would seemingly continue unabated and AI For Bad is roaming unfettered. Furthermore, clever AI developers and those using AI would likely try to do whatever they could to make their AI seem as though it isn’t within the AI Alignment Tax realm. An entire cottage industry would form around how to alter or mask your AI so that it averts being subject to the “dreaded” AI Alignment Tax.

  • What would be the financial basis or scheme for calculating the AI Alignment Tax?

Assume that we were able to nail down a suitable stipulation of what AI applications come within the scope of the AI Alignment Tax. Great! But we next have to figure out what the tax amount will be. This is going to be hard to discern.

Perhaps we could use a set of metrics:

  • Degree of AI Alignment: This aids in identifying how close or how far the AI is aligned (or misaligned) to human values
  • Magnitude of AI Impact: This covers whether the AI is significant or relatively insignificant in terms of the nature of its impact on society and those that use or are affected by the AI
  • Special Cases: This might indicate that, based on criteria such as whether the AI is for academic use or purely research-based use, it is taxed differently than other use cases
  • Exclusion: This might indicate that depending upon the situation, the entity or person might be excluded from the tax, such as certain types of non-profits and so on
  • Other

Generally, the idea is that if the degree of AI alignment is assessed as good (being well-aligned), this reduces the tax amount, while if the AI is badly aligned the tax goes up, perhaps sharply so. In addition, the magnitude of the AI impact comes into play. If the AI has almost no magnitude of impact, perhaps even if the AI alignment is out of whack, we would in any case keep the tax low. These two factors would interplay with each other in terms of calculating the tax amount.

Not everyone would be subject to some oversimplified scheme. There might be a special cases category to allow for mitigating aspects of the AI Alignment Tax. Some instances might also warrant a complete exclusion from being covered by the AI Alignment Tax.
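To make the two-factor interplay concrete, here is a minimal hypothetical sketch. Every name, rate, discount, and threshold below is invented purely for illustration; no actual tax scheme is being proposed or described.

```python
# Hypothetical sketch of the two-factor tax idea described above: the tax
# rises with misalignment and with societal impact, with carve-outs for
# special cases and exclusions. All rates here are invented.

BASE_RATE = 100_000  # hypothetical maximum annual tax, in dollars

def ai_alignment_tax(misalignment: float, impact: float,
                     special_case: bool = False, excluded: bool = False) -> float:
    """misalignment and impact are each scored 0.0 (none) to 1.0 (severe/large)."""
    if excluded:
        return 0.0  # e.g., certain non-profits fall outside the tax entirely
    tax = BASE_RATE * misalignment * impact  # the two factors interplay
    if special_case:                         # e.g., academic or research use
        tax *= 0.25
    return round(tax, 2)

# Well-aligned, high-impact AI pays little; badly aligned, high-impact pays a lot.
print(ai_alignment_tax(misalignment=0.1, impact=0.9))  # 9000.0
print(ai_alignment_tax(misalignment=0.9, impact=0.9))  # 81000.0
print(ai_alignment_tax(misalignment=0.9, impact=0.9, special_case=True))  # 20250.0
```

Note how the multiplicative form automatically keeps the tax low when the impact is near zero, even for badly misaligned AI, matching the interplay described above.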

  • What are the rationale and key guidelines associated with an AI Alignment Tax?

The rationale or logical basis of an AI Alignment Tax is to provide a societal policy lever to focus on garnering AI that is AI For Good and averts AI For Bad. Those that develop AI and those that field and even use AI will presumably be desirous of minimizing their potential AI Alignment Tax burden, thus, driving them toward AI For Good.

In brief, this might be one of many such tools or paths toward protecting humankind from adverse AI.

An AI Alignment Tax will likely only be viable if it is relatively easy to understand and calculate. If it is complex or convoluted, the resultant confusion and noise will undoubtedly drown out the true value of having this particular societal lever for guiding AI. Per Einstein’s quote that was earlier mentioned, an AI Alignment Tax should not be the hardest thing in the world to understand. It should instead be one of the easiest things to understand.

For purposes of being a continual reminder about averting AI For Bad, an AI Alignment Tax would presumably be incurred on an annual basis. Any longer time span might be insufficient to be a diligent prodder. Furthermore, if an AI application changes materially over the course of a year, it might be subject to immediate recalibration of the AI Alignment Tax aspects. Doing so would catch those that might seek to game the tax by being seemingly AI For Good at tax time and yet allowing AI For Bad once the tax has been ascertained and paid.

To ensure fairness, there would have to be a mechanism to provide for financial penalties toward those that either unlawfully skirt around the AI Alignment Tax or evade it entirely. There would also need to be enforcement, including possible criminal charges.

The scope of an AI Alignment Tax might vary geographically and via jurisdictional boundaries. Some believe that a federal AI Alignment Tax might be appropriate, while others aim for a state-level or local-level rather than a federal-level version. If the idea takes off, such a tax would probably arise at all levels, including internationally as well.

Again, the devil is in the details.


Not everyone is keen on these various floated AI Alignment Tax schemes.

Crazy idea, some exclaim.

Won’t work, others emphasize.

One highly vocalized worry is that an AI Alignment Tax might spur a slew of slippery slope downfalls.

For example, if we are going to have an AI Alignment Tax, some might right away start seeking non-AI-related taxes on any kind of software or systems (i.e., a Non-AI Alignment Tax). This is the “bursting dam” worry. Once any such tax is set up and accepted, a tsunami of other taxes is bound to be thought of and imposed.

Another expressed qualm is that the whole matter might devolve into a money grab.

Forgetting or overlooking the initial premise of the AI Alignment Tax, some taxing agencies are undoubtedly going to become preoccupied with the tax revenue that can be raised. Whereas the original precept is that the AI Alignment Tax exists to align AI with human values, shenanigans can end up shifting priorities solely toward a money-raising pursuit that has little to do with alignment considerations.

You can readily imagine a bloated tax-collecting bureaucracy being formulated that loses sight of the mission associated with an AI Alignment Tax. Heads down and not caring a whit about AI alignment. The exercise becomes a mechanized mindless pursuit of tax dollars.

A final thought on this topic, for now.

I’m sure that you are familiar with the famous saying by Benjamin Franklin when he indicated in 1789: “Our new Constitution is now established and has an appearance that promises permanency; but in this world nothing can be said to be certain, except death and taxes.”

The two certainties in life are said to be death and taxes.

We might add an additional certainty.

The three certainties in life are death, taxes, and AI.

Let’s be certain as best we can that the certainty of AI is aiming toward AI For Good and not toward AI For Bad. Certainly, we can all at least agree with that.
