Before coming to college, I had a vague, yet sufficient, definition of art. Thanks to academia, I had to let go of all my presuppositions about art, and pretty much everything else I thought I knew.
I started thinking about art and its place in the industrial world when I took a class at Columbia on jazz. When we were learning about John Coltrane and his amazing ability to phrase his melodies, the professor showed us a YouTube video of a robot playing Coltrane’s “Giant Steps.”
Students burst into laughter. Some criticized the performance, saying that the robot, despite its ability to “breathe” into the instrument, could not produce vibrato, an essential technique for human performers. I knew that for the purposes of the class, I was supposed to appreciate Coltrane and his performance more by rejecting a robot’s failed attempt. But I was bothered by this shocking footage then—and apparently still am. I thought to myself, “That can’t be called jazz. That can’t be called art.”
I wonder how many other projects are in progress right now in which scientists get together and build machines to imitate humans. I also wonder what Plato would think of humans of the 21st century. We “imitate the gods” not only through poetry but also through science, by engineering objects that perform a mimesis of us!
Right after this traumatic experience, I debated the matter with a computer science major concentrating in artificial intelligence: Can machines produce art?
He argued that given a computer with immense storage and enough information about the quantifiable variables that make up a human playing the violin, it would be possible to program such a robot. Telling the difference between the robot and a human performer, he said, would be like telling the difference between mass-produced wine from an automated factory and homemade wine.
I didn’t want him to go on. He had already convinced me that such a future was possible. But being a violist myself, and having grown up listening to beautiful music, I wanted to claim that even if a virtuoso robot does come along, the human performer will always be the real artist.
Because what if there is no way to program something like sublimation, the process by which an artist transfers all of his or her internal energy into a performance? Then again, how do we know the audience can pinpoint a musician’s intent in the first place?
The question of the nature of art (the branch of philosophy known as aesthetics) is tricky. Unlike ethics or politics, art is not something we appreciate, teach, or do out of obligation. Artists have always existed, and people still appreciate artwork from centuries past, but the purposes and consequences of art remain up in the air for artists and philosophers alike.
David Hume, being an empiricist, held that sense experience is the only source of human values, including the arts. But perhaps our tastes and preferences can be biologically predicted, and maybe, given a big enough computer, our preference for which sounds are beautiful can be predicted, too.
This interpretation troubles me, because it makes humans sound completely calculable, simple, and dumb. Brave New World, anyone? Then again, if you do happen to find the robot’s performance fascinating, perhaps that’s okay, because it’s all part of a Hegelian dialectical path to a higher culture. Hegel would call this the Zeitgeist, or “spirit of the age”—an apt attitude for a time when humans are practically symbiotic with computers (computers rely on us, too, if only to switch them on).
But what makes this question even more confusing is that some human-made music imitates machinery. Last year, the Columbia University Orchestra performed Alexander Mosolov’s “Iron Foundry” (also known as “machine music”), which was an imitation of the sounds of a factory. The violists were told to make screeching sounds like metallic wheels scratching each other, and other sections represented other mechanical parts. Critics might interpret the piece as the glorification of industrialism. But I wonder: If robots ever actually develop sentiments and a sense of curiosity, and if they find a YouTube video of the CUO playing the Mosolov, would they question the legitimacy of it?
And I think that’s what this problem comes down to: not whether humans can tell the difference between human art and machine art, but whether robots will ever develop any sentiment at all. If art is about expression, the robot must have something to express, as well as the desire to express it. In that case, how would a computer scientist program these sentiments? What would the quantifiable variables be?
Then, I suppose, we can debate about what sentiment is, but you’ll have to wait two more weeks to hear my opinion on that.
Yurina Ko is a Barnard College junior majoring in philosophy. She is a senior editor of the Columbia Political Review. 2+2=5 runs alternate Mondays. firstname.lastname@example.org