In addition to my work as a singer-songwriter and educator, I operate as an embedded practitioner studying how AI systems function in sustained creative practice.
Rather than approaching AI from a purely technical or theoretical standpoint, my work focuses on how creative professionals integrate these tools into real-world workflows over time.
This research documents what becomes possible—and what breaks—when AI is used continuously rather than in isolated sessions.
As a working artist, I engage AI as part of an ongoing creative process, allowing long-horizon use to surface insights that are often invisible in short-term or experimental settings.
The materials below explore this embedded, practice-based approach and its implications for creative work, tool design, and future human–AI collaboration.
Organizations and collaborators may engage this work through:
• Artist-in-residence roles
• Embedded creative research
• Long-horizon workflow studies
• Documentation of sustained AI use in real environments
• Translation between creative and technical teams
This case study documents the completion of the single “Already Know” as an example of long-horizon human–AI collaboration in creative practice. Rather than generating the song’s content, AI support helped resolve a long-standing lyrical bottleneck and sustain momentum through recording and release.
The study argues that AI’s most meaningful impact in creative work may lie not in producing novel outputs, but in increasing the likelihood that unfinished work reaches completion. In this case, AI functioned as a continuity support system—helping stabilize forward motion across time while preserving human authorship.
This case suggests that under sustained interaction, AI may shift from a generative role to a stabilizing one. In episodic use, systems are typically evaluated on their ability to produce novel content in response to discrete prompts. In the completion of “Already Know,” however, the system’s primary contribution was not the introduction of new material but the preservation of forward motion across time. Its function evolved from phrasing exploration during the writing phase to continuity support during execution and production. This indicates a distinct interaction mode in which alignment is expressed not through output quality alone but through the system’s capacity to maintain coherence with an existing human intention across interruptions, delays, and changing constraints. Such stabilizing behavior is unlikely to appear in short-horizon evaluations and suggests that long-term collaboration may reveal forms of alignment support that are currently underexamined in prompt-based assessments.
Keywords: Human-AI Collaboration (HAC), Long-Horizon Alignment, Creative Stewardship, Continuity Support Systems, Qualitative Evaluation.
Technical Case Study: Creative Survivability & AI Continuity (pdf)
Artificial intelligence systems are increasingly used in creative and research workflows that unfold over weeks, months, or years. Yet most AI tools are designed for short interactions and frequently reset context. This creates an invisible burden for users who must continually reconstruct meaning, decisions, and process history.
I refer to the missing layer that supports sustained collaboration as continuity infrastructure.
Over the past year, I have been documenting a real-world case study of long-horizon human–AI collaboration through my own creative practice as an independent musician and researcher. Rather than consisting of isolated prompts, my interaction with ChatGPT developed into an evolving workflow involving songwriting, production planning, research writing, and reflective documentation.
This process revealed several recurring patterns.
First, users often become continuity stewards, maintaining the thread of a project across model resets, new sessions, and shifting tools. This stewardship includes summarizing prior work, re-establishing context, and preserving decisions made earlier in the process.
Second, extended collaboration generates interpretive labor. The human participant frequently acts as a translator between their evolving goals and the model’s capabilities, shaping prompts, reframing ideas, and stabilizing shared terminology.
Third, meaningful collaboration leaves behind artifact trails. In my case these include released songs, research papers, process notes, and structured documentation of the collaboration itself. These artifacts provide a traceable record of how human–AI creative practice develops over time.
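To make the stewardship and artifact patterns above concrete, here is a minimal sketch of a “continuity record” a human steward might carry between sessions. It is illustrative only; the class and field names (ContinuityRecord, handoff_summary, and so on) are assumptions of this sketch, not an existing tool or API.

```python
# Minimal, illustrative continuity record a human steward might maintain
# across model resets. All names are assumptions, not a real API.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Decision:
    made_on: date
    summary: str        # e.g. "keep the bridge lyric from draft 3"
    rationale: str

@dataclass
class ContinuityRecord:
    project: str
    goals: list[str]                                      # stable project framing
    decisions: list[Decision] = field(default_factory=list)
    artifacts: list[str] = field(default_factory=list)    # songs, notes, papers

    def handoff_summary(self) -> str:
        """Text a steward can paste at the start of a new session to restore context."""
        lines = [f"Project: {self.project}", "Goals: " + "; ".join(self.goals)]
        lines += [f"Decision ({d.made_on}): {d.summary} ({d.rationale})"
                  for d in self.decisions]
        lines += [f"Artifact: {a}" for a in self.artifacts]
        return "\n".join(lines)
```

The point of the sketch is that this labor currently lives in the user’s head and chat history; giving it an explicit structure is what would move it from stewardship into infrastructure.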
Taken together, these observations suggest that continuity is not simply a feature of conversation logs. It is a structural requirement for long-term human–AI collaboration.
If AI systems are expected to support real creative or research work, continuity must become part of system design rather than an accidental byproduct of chat history.
My ongoing work documents this process from the perspective of a practitioner actively using AI in a long-horizon creative workflow. It offers both a field case study and an initial framework for understanding the human role in sustaining continuity across AI interactions.
This work emerges from a broader practice combining songwriting, music education, and practice-based AI research.
The goal is to better understand how AI systems function within real creative processes over time.
I am documenting this work as an independent creative researcher and musician. I welcome conversations with research teams studying long-horizon human–AI collaboration.
About this paper
Creators and educators are increasingly using conversational AI for work that unfolds across months—songwriting and release cycles, curriculum design, student planning, and ongoing coordination. This working paper reframes the core challenge of that use as continuity: a governed, load-bearing system property that determines whether an AI system can function as a stable collaborator across time, updates, and shifting constraints. Grounded in longitudinal practice and qualitative synthesis, it introduces reset costs and interpretive debt as vocabulary for the hidden coordination labor users absorb when systems lose context or change behavior, and it outlines design and governance requirements for continuity-aware collaboration.
Status: Working paper (v1.0, January 2026)
Extended use of conversational AI systems is often framed as a psychological anomaly or low-stakes novelty rather than legitimate professional collaboration (Nass & Moon, 2000; Waytz et al., 2014). Yet creators and educators increasingly rely on these systems for projects unfolding across months—songwriting and release cycles, curriculum design, student planning, business coordination, and sustained inquiry. This paper argues that the central challenge facing such use is infrastructural, not cultural. Drawing on traditions in human–computer interaction and infrastructure studies that emphasize breakdown, repair, and cumulative coordination work (Suchman, 1987; Star & Ruhleder, 1996; Orlikowski, 2000), it reframes continuity as a load-bearing system property—distinct from memory—that determines whether conversational systems can function as stable collaborators across time, updates, and shifting constraints.
We define continuity as the combination of stable project framing, stable interaction contracts, and legible transitions when system behavior changes. Building on qualitative synthesis and reflexive longitudinal observation, we introduce two analytic constructs—reset costs and interpretive debt—to describe the hidden labor users perform when systems lose context or shift behavior across versions and policy regimes, extending prior work on sociotechnical maintenance and technical debt (Jackson, 2014; Cunningham, 1992). We further conceptualize trust as an operational variable shaped by predictability and constraint stability rather than sentiment (Lee & See, 2004; Parasuraman & Riley, 1997), and analyze how dominant evaluation and governance practices—optimized for short-horizon prompts and interchangeable sessions—systematically suppress longitudinal signal (Mitchell et al., 2019; Raji et al., 2020; NIST, 2023).
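As a rough illustration of how reset costs could be tracked rather than merely absorbed, the sketch below keeps a simple ledger of context-reconstruction time. The metric and all names are assumptions made for illustration, not an instrument described in the paper.

```python
# Illustrative reset-cost ledger: logs the minutes spent re-establishing
# context after each reset, making hidden coordination labor countable.
from dataclasses import dataclass, field

@dataclass
class ResetLedger:
    entries: list[tuple[str, float]] = field(default_factory=list)  # (cause, minutes)

    def log(self, cause: str, minutes: float) -> None:
        self.entries.append((cause, minutes))

    def total_minutes(self) -> float:
        return sum(minutes for _, minutes in self.entries)

ledger = ResetLedger()
ledger.log("model version change", 45.0)
ledger.log("new session, lost thread", 20.0)
print(ledger.total_minutes())  # 65.0
```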
The paper concludes with design and governance requirements for continuity-aware systems, including versioned collaboration regimes, discontinuity signaling, consented persistence with revocability, and longitudinal evaluation protocols. Taken together, the analysis positions continuity not as an indulgence for “heavy users,” but as a prerequisite for sustainable, accountable long-horizon human–AI collaboration.
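As one hypothetical shape for discontinuity signaling with consented, revocable persistence, the sketch below defines a notice a system might emit before a behavior change takes effect, plus a user acknowledgment that can revoke stored context. The event and field names are assumptions, not an existing standard.

```python
# Hypothetical discontinuity-signaling event: the system announces a regime
# change before applying it, and the user can revoke persisted context.
from dataclasses import dataclass

@dataclass
class DiscontinuityNotice:
    regime_from: str               # e.g. "collab-regime-2025.3"
    regime_to: str                 # e.g. "collab-regime-2026.1"
    changed_behaviors: list[str]   # human-readable summary of what will shift
    persistence_kept: bool         # whether stored context survives the change

def acknowledge(notice: DiscontinuityNotice, revoke_persistence: bool) -> dict:
    """User response: accept the transition, optionally revoking stored context."""
    return {
        "regime": notice.regime_to,
        "persistence": notice.persistence_kept and not revoke_persistence,
    }
```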
Keywords: Human-Computer Interaction (HCI), Sociotechnical Infrastructure, Interpretive Debt, Reset Costs, Long-Horizon Collaboration, Systems Maintenance, Technical Debt.
Continuity as Infrastructure: Load-Bearing Design in Long-Horizon Human–AI Collaboration (pdf)
As AI systems move from analyzing the world to acting within it, the question is no longer just what they optimize — but what they help endure.
This paper explores continuity as the difference between intelligence that extracts from the future and intelligence that sustains it, introducing continuity stewardship as a design approach for systems operating across time.
Keywords: Long-Horizon AI Alignment, System Stewardship, Continuity Design, AI Sustainability, Persistence Architectures, Temporal Governance, Human-AI Co-evolution.
From Optimization to Stewardship: Continuity and the Future of AI (pdf)
About this paper
This paper examines interpretive labor in long-horizon AI systems and proposes governance mechanisms for formalizing translator roles.
Status: Working paper (v1.0, February 2026)
As AI systems increasingly persist across months and years of use, governance challenges shift from discrete failure modes toward slow-moving sociotechnical dynamics—creeping reliance, authority normalization, evolving trust relationships, and identity-shaping workflows. Contemporary deployment pipelines emphasize telemetry, benchmarks, and short-horizon audits, yet many of these effects remain structurally invisible.
In practice, organizations already depend on a small subset of highly engaged users to surface emergent risks, translate system changes into lived consequences, and articulate governance gaps before they appear at scale. These users function as an informal interpretive layer in deployment—one that is structurally relied upon but rarely designed, compensated, or audited.
This paper introduces translator trust as a governance construct for long-horizon AI systems: institutional pathways that authorize, resource, and bound human interpretive labor required to make slow-moving deployment dynamics legible. We argue that this interpretive labor constitutes an infrastructural dependency and should be institutionalized rather than left ad hoc. Drawing on extended creative deployments and emerging agentic architectures, we analyze how translator roles arise, why informal reliance produces governance vulnerabilities, and how programs can be designed for pluralism, rotation, auditability, and independence protections. We conclude with implications for research labs, product teams, and regulators.
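To suggest how rotation and auditability might be operationalized, here is a speculative sketch of a bounded translator term that expires by default, so that renewal is an explicit, logged decision. The structure and names are assumptions of this sketch, not a mechanism specified in the paper.

```python
# Speculative sketch of a bounded, auditable translator term: the role
# expires automatically, forcing rotation unless explicitly renewed.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TranslatorTerm:
    translator_id: str
    starts: date
    term_length: timedelta    # bounded term forces rotation
    scope: list[str]          # which deployment dynamics this role may report on

    def active(self, today: date) -> bool:
        return self.starts <= today < self.starts + self.term_length

term = TranslatorTerm("t-001", date(2026, 1, 1), timedelta(days=180),
                      ["model transitions", "reliance patterns"])
print(term.active(date(2026, 3, 1)))  # True
```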
Keywords: AI Governance, Sociotechnical Evaluation, Long-Horizon Deployment, Post-Deployment Monitoring, Interpretive Labor, Algorithmic Auditing, Red Teaming (Human-in-the-loop).
Translator Trust: Governing Interpretive Labor in Long-Horizon AI Systems (pdf)
This repository contains longitudinal case studies (2025–2026) of human–AI collaboration in music production. Research focus: Continuity Stewardship, Reset Costs in model transitions, and Interpretive Labor. The dataset includes 14+ months of interaction logs documenting project recovery and long-horizon creative alignment.