Talks and presentations

By AI: Authorship, Literature, and Large Language Models

May 03, 2024

Presentation, Cornell University, A.D. White House, Ithaca, NY

Imani presents her undergraduate honors thesis as part of the Spring Humanities Research Conference hosted by the Humanities Scholars Program at Cornell University. This thesis was awarded Summa Cum Laude by the Department of Literatures in English at Cornell.

Literature in the Age of Mechanical Reproduction

February 16, 2024

Presentation, Cornell University, Department of Comparative Literature, Ithaca, NY

As part of the Theory Colloquium series hosted by the Department of Comparative Literature at Cornell University, Imani was invited to speak at a colloquium themed around translation. Her presentation, drawn from a chapter of her undergraduate honors thesis, “By AI: Authorship, Literature, and Large Language Models,” examined how literature translates the self and human experience, then posed the question of whether, and to what extent, we should expect generative AI (ChatGPT, in particular) to do the same.

RealSketch: Language-Based Material Manipulation for Transforming Sketches into Images

August 05, 2022

Poster, Brown University Summer Research Symposium, Providence, RI

Image translation, that is, altering the style and content of a given image to match predefined objectives, is a novel technique for artists to achieve their artistic vision. Recent work in image-to-image translation introduces methods to generate photorealistic imagery from non-realistic domains (e.g., drawings, paintings). However, these models do not allow the user to select specific regions for transformation or to control the style of generation through text while simulating a photorealistic style. In this project, we present a language-based image-to-image translation model that allows the user to perform object-level edits via semantic query texts. The model takes as input a sketched image, an instance segmentation mask of the objects in the sketch, and their corresponding text descriptors, and translates the sketch into the photorealistic domain through texture generation. We adapt an existing image-to-image translation architecture along with a pre-trained text-image embedding model to encode text embeddings within an instance segmentation mask for controlled regional material-appearance editing. Our method lets users edit object appearance, generating diverse outputs from the same input image. Our work automates architectural and product visualization by allowing users to control how sketches and designs are presented in the photorealistic domain.
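The conditioning step described in the abstract, encoding per-object text embeddings within an instance segmentation mask, can be sketched as follows. This is a minimal illustration in pure Python, not the project's actual implementation: all names are hypothetical, the embeddings are toy vectors standing in for the output of a pre-trained text-image embedding model, and the real pipeline operates on tensors rather than nested lists.

```python
# Illustrative sketch (hypothetical names): build a per-pixel conditioning map
# by writing each region's text embedding into the pixels covered by that
# object's instance mask, so a downstream generator can vary material
# appearance per region.

def embed_mask(mask, text_embeddings, dim):
    """mask: H x W grid of instance ids (0 = background).
    text_embeddings: dict mapping instance id -> embedding vector of length dim,
    e.g. produced by a pre-trained text-image embedding model from the user's
    semantic query texts.
    Returns an H x W x dim conditioning map."""
    background = [0.0] * dim  # background pixels get a zero embedding
    return [
        [list(text_embeddings.get(inst_id, background)) for inst_id in row]
        for row in mask
    ]

# Example: a 2x3 sketch with two objects described by different query texts.
mask = [[1, 1, 0],
        [2, 2, 0]]
# Toy 2-dimensional embeddings standing in for, say, "brick wall" and "oak floor".
embeddings = {1: [0.9, 0.1], 2: [0.2, 0.8]}
cond = embed_mask(mask, embeddings, dim=2)
```

Swapping the embedding attached to an instance id changes the conditioning only inside that object's region, which is what enables the object-level, text-driven edits the abstract describes.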