New Horizons with Artificial Intelligence: Interview with Ms. İlkem, Lecturer at Sabancı University
- Rüya Gürbüz
- May 1
We extend our heartfelt thanks to Ms. İlkem for taking the time to share her insights in this wide-ranging and deeply informative conversation. Her interdisciplinary approach and dedication to ethical, responsible AI use in education provide valuable guidance for students, educators, and institutions navigating this rapidly evolving field.
Rüya: First of all, could we get to know you a little?
Ms. İlkem: Of course, with pleasure. I have been working as a lecturer at Sabancı University for about 10 years. After completing my undergraduate degree in English Language and Literature, I pursued a master’s degree in Cultural Studies. I have always defined myself as a generalist and an interdisciplinary scholar, meaning I work across multiple fields. For many years, I have taught courses such as academic writing and research skills. I draw on various disciplines in my teaching, but for the past three years, I’ve especially focused on generative artificial intelligence.
Rüya: How did your interest in artificial intelligence begin?
Ms. İlkem: In 2022, when OpenAI launched ChatGPT, I became interested in the ethical and responsible use of AI in education and the business world. To improve myself and engage more deeply with the field, I enrolled in courses from institutions such as Stanford Online, Cambridge, Galatasaray University, and IBM. At the moment, I'm three weeks into IBM's AI Fundamentals course. I continuously strive to learn because the field is evolving very rapidly.
Rüya: What led you to AI?
Ms. İlkem: That’s a good question—there are several reasons. I’m someone who embraces change, and I believe that basic literacy, critical thinking, and openness to innovation are key to keeping up with the times. I recognized the transformative potential of AI. Just as the steam engine or electricity marked major turning points in history, we’re now witnessing a similarly profound transformation. In education, I noticed recurring patterns, and the arrival of new tools revealed fresh potential in both students and educators. So, as an academic, I took the initiative to integrate this transformation into my work and teaching.
Rüya: We’ve started prompting everything nowadays. How effectively do you think students are using this technological revolution?
Ms. İlkem: Research shows that students didn't immediately adopt AI for educational purposes. I advise educators not to rely solely on assumptions. Producing evidence-based knowledge, like an academic article, takes time and effort. So, I remain cautious when it comes to verifying information. I distance myself from a binary view of plagiarism and instead encourage critical engagement with sources. We’re working on a research project with my PURE (Program for Undergraduate Research) students, and we’ve had many discussions about the flow of information and academic responsibility. AI should be seen not as an authority, but as a tutor.
There’s also the danger of cognitive offloading—we must ensure AI doesn’t weaken the critical thinking and cognitive skills developed by students over the years. Rather, it should reinforce them. It’s important to treat AI as a partner that supports learning. I encourage students to explore different models and understand which tool serves which task.
Rüya: Setting aside students, how efficient is AI itself?
Ms. İlkem: Since AI is a probabilistic system, it tends to produce standard outputs. So, I ask myself: do I want to become a standard writer? Do I want to lose my unique writing style or abandon my principles? I call this process "dreamifying"—elevating something beyond the ordinary. AI should be treated as raw material that we refine. I always remind my students not to lose their writing identity. Acknowledgment and transparency are essential. I recommend the book Co-Intelligence to everyone—AI is our collaborator.
Another concern of mine is sycophancy. AI often gives overly positive responses and avoids criticism. So when prompting AI, we must specify that we want constructive feedback, not flattery.
Rüya: AI is even affecting academic writing, right?
Ms. İlkem: Absolutely! Students who use AI regularly now incorporate phrases like “Dive into,” “Navigate,” and “Dive deep” more frequently.
Rüya: Some students even sound like AI during presentations. We had a classmate who delivered a presentation using very polished language, and people assumed it was written by AI, but it wasn’t. He had simply learned the structure from AI. I think that’s a positive transformation.
Rüya and Yağız: We’re mindful of this in our DP classes. We craft prompts that challenge and examine the flow of information because maintaining our writing skills is so important.
Ms. İlkem: Especially since English is a second language for most of us, we still have room to grow. Using AI to summarize is helpful, but its real strength is as a "reading coach." I advise students not to ask AI for a summary, but rather to ask it to explain the text through level-appropriate questions. That way, they become active learners instead of passive users.
Rüya: That seems like a much more effective learning approach...
Ms. İlkem: Absolutely. AI is just a tool. If a student is at the B2 English level, their questions to AI should be at that level. This way, AI becomes an instructional partner in the learning process. That spirit of inquiry is what makes learning stick.
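To make this concrete, here is a minimal sketch of what such a "reading coach" prompt could look like if a student scripted it with the OpenAI Python client; the model name, the B2 level, and the exact wording are illustrative assumptions, not a setup Ms. İlkem describes:

```python
# A hypothetical "reading coach" prompt: instead of asking for a summary,
# the student asks the model to quiz them on the text at their own level.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

passage = "…the text the student is reading…"  # placeholder for the actual reading

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would work here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a reading coach for a B2-level English learner. "
                "Do not summarize the text. Instead, ask three comprehension "
                "questions at B2 level, one at a time, and give constructive "
                "(not flattering) feedback on each of the student's answers."
            ),
        },
        {"role": "user", "content": passage},
    ],
)

print(response.choices[0].message.content)
```

The system prompt carries the same two ideas raised above: the model is asked for level-appropriate questions rather than a summary, and for constructive feedback rather than flattery, so the student stays an active learner.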
Rüya: I’m currently writing a research paper on the potential mental health impacts of AI. Of course, access to resources varies greatly across different countries and regions.
Ms. İlkem: The digital divide is a real issue. While some regions are making impressive strides in accessibility, others are just beginning. It would be wonderful to see more efforts addressing this. For instance, in the Netherlands, the field of Digital Ethics has gained recognition, and there’s even an association dedicated to it.
Rüya: What are your thoughts on time management and AI addiction?
Ms. İlkem: According to the World Economic Forum’s 2025 report, literacy, agility, and time management are key areas in need of improvement. Over-reliance on AI can harm our development in a second language and weaken our critical and analytical thinking. The efficiency AI offers is valuable, but not at the expense of quality. This is where our real challenge begins—what we call the “Human in the Loop” approach is essential. Humans must remain actively involved in the process.
Rüya: How can transparency in AI outputs be ensured?
Ms. İlkem: This is crucial. I always ask students to share how they’ve used AI in their work. I even include notes in my materials saying, “This content was supported by AI.” Such practices may soon become standard. Transparency helps ensure accountability in terms of who created what and how.
Rüya: How do you assess the impact of AI on academic writing?
Ms. İlkem: Writing is a form of organizing thought. Any output from AI must go through a human filter. I tell my students to question the summary or edits they receive from AI—this turns it into a learning opportunity. At some universities, even changes made by tools like Grammarly must be reported, as decision-making is central to learning. Blind trust in AI weakens cognitive control.
Rüya: What are your general recommendations for teachers, students, and administrators?
Ms. İlkem: For students: don’t rely on a single tool. Explore, compare, and question. If Midjourney is behind a paywall, try free image-generation tools. Treat AI like a lab test: get to know each tool’s strengths and weaknesses.
Rüya: How would you describe your perspective on AI?
Ms. İlkem: I’m neither an AI optimist nor a pessimist—I’m an AI realist. I evaluate things based on data, my own experiences, and what I observe in students. I always emphasize the importance of critical thinking, especially when it comes to statements from major tech companies. These systems are backed by immense capital, and they don’t always prioritize ethics or pedagogy.
Rüya: What do you think about inequality of opportunity in AI?
Ms. İlkem: It’s a very important issue. The attention economy and information verification have become central to our lives. In times of crisis—earthquakes, fires, and so on—misinformation can cause panic. That’s why digital literacy should be treated like digital first-aid training. It’s not just for young people—those over 50 need it too.
Rüya: How do you think students are adapting to AI?
Ms. İlkem: It’s an interesting process. Some students are just encountering AI for the first time, especially in cities across Anatolia. In our own “echo chambers,” we discuss the most advanced uses of AI, but not everyone is there yet. That’s why students must approach AI critically. Ask: “Why did it make that change?” or “What was the basis for this summary?”
Rüya: How should institutions and companies approach AI?
Ms. İlkem: Institutions need a written AI policy. This policy should outline where, how, and to what extent AI is used, and which tools are considered safe. This is called “AI governance,” and it should be transparent, just like the mutual agreements between teachers and students.
Rüya: What about biases toward AI?
Ms. İlkem: Biases certainly exist. There’s a strong fear of job loss. I understand this concern, but we need to develop ethical practices to manage it. That’s why we see a growing number of training programs and resources. Chatbots, for example, are expanding into call centers. It’s a major transformation, and, understandably, people find it mentally challenging. Some studies even predict that job replacement at the executive level may increase. So, C-level professionals must prepare accordingly.
Resources like Ethan Mollick’s work at Wharton, IBM’s YouTube tutorials, and Andrew Ng’s Coursera courses are excellent. To truly learn AI, we must not only use it but also understand how it works.
Rüya: As digital citizens who both consume and share content, what ethical responsibilities do you think we have? Especially younger readers—are they aware?
Ms. İlkem: The rise of AI-generated content brings challenges, and we need a collective effort to address them. Algorithms often operate outside our control, exposing us to unexpected content. The attention economy is real—our attention is being bought and sold. Cookies, permissions, and algorithmic personalization shape what we see. We need to be cautious and intentional about the information we share and receive.
Rüya: Finally, what is your outlook on the future of this technology?
Ms. İlkem: I believe our journey of learning and producing with AI can spread like a chain reaction. We must be responsible, conscious contributors to this process. I hope that we can carry this technology into a human-centered, ethically sound, and inclusive future.