<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Prompting on Rostyslav Ivanitsa</title><link>https://irostyslav.github.io/tags/prompting/</link><description>Recent content in Prompting on Rostyslav Ivanitsa</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 05 May 2026 20:46:59 -0700</lastBuildDate><atom:link href="https://irostyslav.github.io/tags/prompting/index.xml" rel="self" type="application/rss+xml"/><item><title>PMs versus AI Agents</title><link>https://irostyslav.github.io/posts/pm-ai-agents/</link><pubDate>Sat, 05 Jul 2025 00:00:00 +0000</pubDate><guid>https://irostyslav.github.io/posts/pm-ai-agents/</guid><description>Generative AI in product teams is limited by data access and isolation. The next wave is AI agents deeply integrated into core systems, acting as proactive, autonomous, goal-oriented, context-aware, and action-oriented digital teammates. This will transform product management by automating mundane tasks, freeing PMs to focus on true strategy, discovery, and leadership, amplifying their impact.</description></item><item><title>Prompt Engineering: Dead?</title><link>https://irostyslav.github.io/posts/dead-prompting/</link><pubDate>Sat, 05 Jul 2025 00:00:00 +0000</pubDate><guid>https://irostyslav.github.io/posts/dead-prompting/</guid><description>Manual prompt engineering is inefficient and dead. The future involves automated prompt optimization using a three-part system: a core LLM application, an LLM-as-a-judge evaluator to measure performance, and an auto-improving agent that researches, generates, and refines prompts. This approach has shown significant performance gains, shifting the focus from manual prompt tweaking to building robust evaluators and agentic systems for continuous, automated LLM improvement.</description></item></channel></rss>