ICoD exclusive interview with Kyle Rath on design and AI

Kyle Rath's research focuses on AI within the space of design, ethics and good governance

02.06.2025 Editorial

What would it mean to focus less on the technological speed, efficiency and development of AI, and more on effective communication and professional ethical practice as the primary driver? In what ways does human intelligence matter when designing with AI? ICoD interviews design academic Kyle Rath on turning our attention to the meaningful use of AI.

As part of ICoD's professional mandate, the Council works to provide Members with Best Practices guidelines. In this exclusive interview with academic Kyle Rath, lecturer at the University of Pretoria (South Africa) and Non-Executive Board Member of ICoD, we discuss how to work critically with AI as a prelude to how Members will work together in-person to draft a workable living document of insights at the upcoming Regional Meeting 2025: The Next Code of Conduct in Ljubljana on 6 June 2025.


INTERVIEW

01  Can you tell us a bit about yourself and how you came to work at the intersection of design and AI?

I am a communication designer and lecturer at the University of Pretoria, South Africa, teaching under- and postgrad for the past 12 years. Initially, my focus was in Typography, on which I completed my MA. Essentially, I followed the well-established idea that typography holds a kind of intrinsic symbolism. Helvetica for simplicity, Cooper Black – laid-back, or an italic, curvilinear font for romance. A heavier, rectangular font is more masculine, while a softer, organic letterform is more feminine. And of course, the multitudes of other sorts of genres, classifications and stereotypes. The point is that even if we wanted to rebel or turn these inherent symbols on their head (using a softer font for a heavy metal band, for instance), we do so from a place of pointing to, or riffing from the ‘original symbolism’ – very much a kind of ‘mythic narrative’ woven into letterforms.

And it makes sense – there is sound, established science to suggest that we inject meaning into things even when there is no intrinsic meaning. It’s a human, perceptive phenomenon. Whether we see faces in clouds, or in the design of a car’s front, it’s how we make sense of the world – we add meaning and make patterns.

But then this weird, non-human thing, called AI popped up (it has been around for decades, but I’m referring to contemporary AI, from around 2015 onward). Although it is very much reliant on pattern recognition, it is not human. By this I mean that, unlike a human, it is trying to establish pattern reliability (repeatable, and non-subjective patterns) by crawling through massive amounts of data, retesting, recycling and then reestablishing best-match patterns. Humans don’t do this – we rely on anecdotes and our own frame of reference. At best, we seek ‘sample sizes’ and then make deductions based on that. For obvious reasons, this means that AI is far more accurate and far faster than we will ever be. And although this seems to be an ideal from a productivity point of view, there are clear and immediate problems with this kind of processing. I am not talking about the problems AI has with generating fingers, or readable typography (this will be resolved very quickly). I am talking about the much deeper challenges that are not as easy to solve and navigate. This is where AI sparked interest for me.


"[...] my greatest concern is uncritical use of AI. Total dependency on AI. I am not opposed to (in fact, I encourage) my students using AI in their work."

 

02  How do you see designers both seduced by, and resistant to, AI, when it comes to using AI as a design tool? What advantages and cautionary tales stand out for you in this moment? How do you integrate AI in your process?

As with any new/emergent technology, there are going to be several thousand stories on advantageous, ‘brilliant’ ways of bringing AI into design work, and of course, the horror stories as well. There are too many to go through, and we will see more and more emerge as AI is adopted further. Broadly, we have concerns over what the role of the designer will be in the near future, what are some of the ethical issues with using AI, concerns of surveillance, job displacement and so on. On the other hand, there are a plethora of opportunities mostly surrounding speed and production. 

What I will say, though, is that my greatest concern is uncritical use of AI. Total dependency on AI. I am not opposed to (in fact, I encourage) my students using AI in their work. But when a designer takes a total back seat, and lets the algorithms do all the work, unchecked, that is where I take pause. It is even more alarming when I see this happening in some of the biggest corporations and design companies. And it is happening … all the time.

 

03  Ted Chiang (American sci-fi writer and one of TIME's 100 Most Influential People in AI) has famously stated that “ChatGPT is a blurry JPEG of the internet”. His point is that we only ever get more and more degraded, internet-scraped outputs from AI when we use it for writing, art, and now: design. What is your position on this? To what extent have we moved past this idea that AI is limited by the data of the internet, when we know AI depends so much on who trains it?

AI is not human (it is not sentient, yet). As much as we would like to think we are training a better and better AI, we are currently only training more and more efficient AI. We are still very much caught up in this idea that efficiency and accuracy are the measure of great AI. And it is indeed aspects of efficiency and accuracy that seem to worry us – and for a variety of reasons (job security likely being the major issue here). It is not surprising, as we are taught from a young age to care more about the sciences than the arts. And so, quantitative progress seems the most impressive and most valuable aspect of almost any engineered tool (from the printing press to the dawn of Adobe and the Internet, these have always been the most prized aspects). It is impressive, but there is much more to the story. What we often overlook is arguably just as consequential. I’m talking about the errors that AI makes in terms of communication and how this impacts societies on a global and local level.

AI is limited, but not by the data sets we use. Sure, we have AI that can sift through more data than others, and those that can produce more suitable results, but again, what we are measuring here is accuracy and efficiency. If we position this as the limits of AI, then we could say that AI is totally limitless so long as the required resources (data storage, processing power and so on) are there.

Where we need to turn our attention, as designers, I think, is how to make meaningful use of AI. As I mentioned earlier, meaning is inherent in human processing. In the case of AI, we have a tool that is quite powerful (relative to most other technological advancements in the past few centuries), and so, the responsibility of the designer, I believe, is even greater now.


"Designers are coming in far earlier in the development stages of major application ideation, to conceptualise what a piece of software, or what an application must do. It’s not about developing a thousand and one features for an application, but to decide what features are most useful, and, more importantly, what these features should be in the first place."


It is, of course, concerning when I see the number of (especially smaller) agencies having to close owing to AI-induced redundancy (again, this has happened with every major technological advancement – certain jobs are made redundant. The trick is always to upskill). However, it is also encouraging to see communication designers coming on board in major AI advancements. Especially in fields like robotics and software engineering. Designers are coming in far earlier in the development stages of major application ideation, to conceptualise what a piece of software, or what an application must do. It’s not about developing a thousand and one features for an application, but deciding what features are most useful, and, more importantly, what these features should be in the first place.

 

"In academia, particularly, I see a lot of interest in questioning the ethical use of and unmasking the inherent biases and stereotypes built into AI ... It has been like this for every technology we have ever produced and these stereotypes will continue to be injected into every future technology. The difference is, we are dealing with a far more powerful and rapidly circulating medium now."


04  What stage are we at in envisioning an ethical relationship when building social spaces assisted by AI? How much of a role will AI play as a co-author with human designers in designing these social spaces? What responsibilities must be considered and shared between both?

The ethical use of AI is my particular area of interest. This is a complicated and ‘big issue’ conversation. Again, there are concerns surrounding copyright and who is ‘correcting’ biases embedded into these rapidly developing algorithms. Can we even control them? Is this going to turn out to be a terminator movie, but with a different ending? Of course, we need to remain vigilant on these issues.

To narrow in a little, I will speak from my research interest here as a communication designer. AI is quite poor at communicating. Yes, it can produce millions of images, and these images will become more and more accurate (realistic) and perhaps ‘believable.’ It is bulk communication, but not necessarily good communication. Yes, it is true that any visual communicates. As I said before, we can (and mostly do) read meaning into anything we see. But AI creates a kind of simulacrum – it generates hundreds of millions of images that make no immediate sense because their signs are more-or-less randomly picked from a plethora of references. A prompt is, essentially, extremely vague. And so it is left up to AI to fill in all the gaps. You might ask it to generate a cat, in the style of Picasso, wearing a houndstooth coat. AI is left to make decisions on things like light source, background elements, composition, hierarchy and all the other things that humans are far better at conceptualising.

This is a silly, oversimplified example, but I do not see AI replacing the core function of communication designers any time soon. To communicate well, you need to highlight (edit, construct, add) shared or relatable value in a piece of visual communication. When we say something is meaning-ful, we don’t mean that it is full of meaning, but that the meaning is tailored to our frame of reference. We are essentially always more attracted to communication that is a) acceptable, b) relatable and c) testable (if even anecdotally).

In academia, particularly, I see a lot of interest in questioning the ethical use of and unmasking the inherent biases and stereotypes built into AI. Clearly, there are many biases that have existed in our social spaces that have simply been transferred to new technologies. As we learnt fairly recently with Google’s Gemini, when the AI was trained on a far more liberal leaning data pool, the image outputs were immensely skewed to the point of essentially rewriting history through imagery. This was an easy one to critique, but we need to remain vigilant on less obvious versions of this kind of thing. As you might imagine, if you were to input the prompt: ‘Show me a graphic designer at work’ into most any of the larger generative AI programs, you only need to generate 20 or so images to start identifying all the age-old archetypes (mostly inaccurate) coming to the fore. It will generate mostly males, since they are traditionally the ‘breadwinner;’ they will be ‘European-esque’ (white skin, etc.), wearing glasses and some kind of in vogue hairstyle, perusing a colour chart with their (six) fingers. And to complicate things, it can make incongruous errors too, which, when circulated and reinforced, could lead to further communication issues.

It has been like this for every technology we have ever produced and these stereotypes will continue to be injected into every future technology. The difference is, we are dealing with a far more powerful and rapidly circulating medium now.


05  As a potentially ‘sentient’ life form, AI occupies a distinct place among those intelligences we normally call human. What about AI ‘intelligence’ must be considered more carefully as we move forward?

You will find various answers to this, depending on how far into the future of AI we look. The end game, I think, is whether we ultimately move from narrow to AGI (artificial general intelligence).

At the moment, we have something in between, but certainly more to the narrow side. AI that seems to be edging toward being sentient (the impressiveness of ChatGPT, or how seamlessly tasks are automated for you on your iPhone) is essentially a cluster of narrow AIs tied together by code. Each AI is really good at doing one narrowly tailored thing, very quickly. But it can’t do more than that one thing. And so, most companies make it appear that the AI is able to ‘do-it-all’ for you. Essentially, what is happening is that when one linear AI has finished its task, a piece of code kicks in and tells the next one to complete the next task, and so on, until a chain of narrow AIs appears to us as a fully-fledged ‘being’.

We are able to tweak nuances in different kinds of AI. We can still assess what they are good at and where we can work at training them better, or adapting them in new ways, or ways that fill gaps left by other AI.

At this point, the role of the communication designer is fairly clear. AI at this level is not really ‘intelligent’. It is not ‘thinking’—it is only mimicking human thinking. And so, we are there to help direct the AI, to best replicate our own thinking and to produce better communication outputs for our needs. We are therefore there to amplify its best features for productivity and accuracy, and edit and clean up when it has made a mess. Again, the role of the designer is quite obvious.

When or if we get to the stage of a truly general AI (or AGI), things get more complicated. And when I hear CEOs of the biggest corporations, and influential thinkers in the field of AI refer to the ultimate goal of reaching a point of AGI, it gives me pause. I do think AGI is severely underestimated and underplayed, not only in design, but in most fields. At this point, AI is no longer replicating human thinking, but is matching and surpassing it.

The next logical question, then, is: ‘what does AGI look like?’ Some say it is when a computer can do any and all intellectual tasks, all at the same time, better and faster than a human can. I’ve also heard it defined as a kind of ‘transfer learning’ – something humans are uniquely good at – the ability to think about and consider multiple (often unrelated) ‘things’ at once and develop an otherwise ‘unexpected’ result. Steve Wozniak famously said, “it’s when a robot can make you a cup of coffee, in a stranger’s kitchen.”

In the context of design, AGI wouldn’t just be your basic image or text LLM that tries to give you ‘best-match’ results based on retuning and refining prompts. You wouldn’t be using AI to ‘help’ conceive of a visual style for an animation, or the tone of a brand. AGI will come up with not only the best concept from the jump, but totally surpass even the need for a brief. It will devise, produce and regulate any system, company or process before it even reaches the human hand. And it will have done this before you’ve even thought about thinking about it.


In the best-case scenario, AI could help us find solutions to large-scale problems like climate change, cancer, hunger or wealth inequality. And it would work with us, in our case, designers, to conceptualise, direct and moderate the most ethical ways of doing so.


In the best-case scenario, AI could help us find solutions to large-scale problems like climate change, cancer, hunger or wealth inequality. And it would work with us, in our case, designers, to conceptualise, direct and moderate the most ethical ways of doing so. The worst-case scenario surpasses design, as it likely means we enter a kind of Skynet scenario (a fictional AI system that becomes self-aware and perceives humanity as a threat, leading to a catastrophic war against humans). It may be catastrophic.  

AGI might be a few years away, or perhaps a few decades. It may not even come. Moreover, I don’t know what AGI means for humanity, let alone design. But I agree with many of the leading minds that we are not ready for it. I think this is why focusing on ethical and responsible use of AI is a far more pressing area of investigation, that should at least keep up with our apparent ‘want’ to generate more and more ‘productive’ AI.

 

06  The focus of this Regional Meeting is to discuss changes to the ICoD Professional Code of Conduct for designers. What angles of discussion do you anticipate for this meeting? Are there common fears and/or points of innovation you notice that are shaping how designers think and work? What do you think is the main point of discussion with regards to making sure the Code responsibilizes designers in working with AI?

I don’t know so much if it is a ‘code’, but more of a guide to what the near future of AI might mean for design, but also, importantly, to consider real, on-the-ground use cases of AI currently being applied in design practice. It is a problem, I would say, of irrelevancy, if you offer a code of conduct from a sort of ‘all knowing’ podium of authority. I don’t think ICoD should be viewed as the surveillance tower of design. Yes, you can read the latest expert reviews and provide an eloquently crafted set of rules-to-design-by (which, by the way, AI would probably do better than we could). What I am most interested in is hearing some of the concerns and fears, current applications and what the thinking might be around near-future innovations in design. From there, we can draft a workable living document of insights, rather than a code, per se.

As I have mentioned, my interest lies in the ethical and responsible use of AI. I think this aligns quite nicely with what a ‘code’ of conduct should hope to achieve. Not so much a rule book, but a guide on things to consider before jumping on the AI bandwagon. And we will all have to jump on the wagon at some point soon. I’m hoping the insights gleaned will prove helpful for some in the design community, when they decide it’s time to do so.


"For the workshop at the ICoD Regional Meeting, the idea is to try to get participants from different organisations, agencies and educational institutions to give input that will hopefully help craft ICoD’s ‘code’ of conduct surrounding AI. The more input we get, hopefully the better our document will provide meaning-ful navigation around pressing issues surrounding AI and design."


07  For the workshop at the Regional Meeting (RM), you will use a set of prompts to consider how designers think they would be using AI versus the reality of how they are using AI. And in a way, this will generate thinking about the future of AI generally. Can you talk about the format of your workshop for the RM and your approach to working with AI as a learning tool?

Again, the idea is to gain insight into some of the ‘real’ applications and thinking around AI, currently in design. We have therefore developed a workshop that is centred around sharing some of these insights. We’ll do this using a few prompts on the major concerns surrounding AI in design. From issues such as current use and impact of AI, to the value of AI co-creation; the role of the designer; issues surrounding ethics, responsibility and surveillance; what the future of AI in design might look like from a point of view of innovation and threat; AI skills development and education; and AI dependence.

We’re going to be dividing the participants into 9 groups. Each group will be given a prompt on one of the issues raised above. We’ll be asking them for examples of the kind of issues they are addressing, as well as what they foresee to be a solution/workable insight. We’ve had some pretty cool ‘cards-of-conduct’ designed and made for this, which will help navigate this process. The idea is to try to get participants from different organisations, agencies and educational institutions to give input that will hopefully help craft ICoD’s ‘code’ of conduct surrounding AI. The more input we get, hopefully, the better our document will provide meaning-ful navigation around pressing issues surrounding AI and design.

 

08  What are you working on next?

Right now, my primary focus is research on the ethical and responsible use of AI in design. This has become my main interest in recent years, specifically because I have seen the need for a counter to the focus on ‘speed/efficiency’ and ‘quantitative realism/accuracy’ of AI. As I mentioned, the focus seems to be more on the technological development of AI and less on what this means for effective communication as the primary driver for design.

For me, next year is quite packed with exciting opportunities for AI exploration. There are a few conferences, workshops, exhibitions and on-the-ground research opportunities. I am also a consultant for two major global companies who are interested in the ethical and responsible use of AI. This is a big one – I am hoping there is some buy-in there. And of course, I am thrilled at being welcomed as a new non-executive board member of ICoD in my capacity as a scholar on AI. Specifically, because my interest is in ethics, I want to engage with AI design-in-the-wild and at larger corporate scales. Moreover, the audience size and influence of ICoD are significant, and so if I want my work to be useful, then ICoD is a great platform.


May 2025 

Interview by Alisha Piercy, ICoD Communications Officer

 

KYLE RATH BIO

Kyle Rath holds a PhD from the University of Pretoria, South Africa, and lectures both undergraduate and postgraduate Information Design. He is genuinely fascinated by all aspects of design, however, in his current research, he takes a look at signification that describes, from a theoretical and practical point of view, the way in which AI (and particularly generative artificial intelligence) has come to dominate our post-structural society. His research is made manifest in his teaching and presentations on AI within the space of design and good governance (ethics). In particular, how AI is shaping and will continue to shape the way we engage with our technological and social spaces. He also currently sits on the advisory board and has guest edited for the art and design journal, Image & Text and is a Non-Executive Board Member of ICoD.

 

