Akshay K Nayak
Researcher, Engineer

About

Akshay is a researcher and engineer based in Norfolk, Virginia, currently pursuing a Ph.D. at Old Dominion University in the Accessible Computing Lab, advised by Dr. Vikas Ashok. His interdisciplinary work bridges Human-Centered AI, Accessibility, Usability, Eye Tracking, and Social Computing, with a focus on developing intelligent solutions to improve the usability and accessibility of digital technologies. He has conducted research across various domains, including data visualizations, e-commerce platforms, user-generated content (such as discussion forums and reviews), web archives, and social computing systems. His most recent research focuses on adapting technologies originally developed in affluent settings to support reliable information access for resource-constrained populations, and on designing eye-tracking-based solutions to enhance the usability of dynamic digital content for individuals with low vision. His work has been published in top-tier HCI venues such as CHI, CSCW, SIGCSE TS, IEEE VIS (TVCG), IJHCI, ASSETS, ICMI, and EICS (PACMHCI).

Akshay earned his bachelor's degree from Visvesvaraya Technological University (VTU). Before beginning his doctoral studies, he worked as a Research Assistant at the HandsOn Lab and as a Software Engineer at the London Stock Exchange Group (LSEG) and BetaNXT.

I am actively seeking internship opportunities for 2026.

Selected Publications

Contextual Scaffolding and Self-Efficacy: Supporting Computer Skill Development among Blind Learners in India


ACM CHI '26 β€’ April 2026

Akshay Kolgar Nayak, Yash Prakash, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

Inclusive computer literacy initiatives for blind or visually impaired (BVI) learners are growing, but research largely reflects well-resourced Global North settings. To understand challenges in resource-constrained, multicultural contexts like India, we conducted a four-month contextual inquiry at two training centers serving 94 BVI students. We found rigid, experience-driven instruction and a visually centered curriculum that overlooks BVI learners’ lived experiences and weakens self-efficacy. We argue for culturally responsive computing pedagogy supported by locally adaptable scaffolds for BVI students in developing societies.


Insights in Adaptation: Examining Self-reflection Strategies of Job Seekers with Visual Impairments in India


ACM CSCW β€’ October 2025

Akshay Kolgar Nayak, Yash Prakash, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

We present a study on self-reflection strategies among blind and visually impaired (BVI) job seekers in India. Despite gaining digital skills, many face challenges aligning with industry expectations due to limited personalized feedback and inaccessible job-prep tools. Self-reflection is often a social process shaped by peer interactions, yet current systems lack the tailored support needed for effective growth. Our findings inform the design of future tools to better guide reflective job-seeking and address the unique needs of BVI individuals in the Global South.

πŸ†

Best Paper Award

Adapting Online Customer Reviews for Blind Users: A Case Study of Restaurant Reviews


ACM Web4All β€’ April 2025

Mohan Sunkara, Akshay Kolgar Nayak, Sandeep Kalari, Yash Prakash, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

We present QuickCue, an assistive browser extension that improves the usability of online restaurant reviews for blind screen reader users. QuickCue restructures review content into a hierarchical format organized by aspects (e.g., food, service, ambiance) and sentiment (positive/negative), enabling faster, more focused exploration with minimal navigation. Powered by GPT-4, it performs aspect-sentiment classification and generates targeted summaries, significantly reducing listening fatigue and helping users make more informed decisions.


Towards Enhancing Low Vision Usability of Data Charts on Smartphones


IEEE VIS (TVCG) β€’ September 2024

Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

We present GraphLite, a mobile assistive system that makes data charts more usable for low-vision screen magnifier users. GraphLite transforms static, non-interactive charts into customizable, interactive views that preserve visual context under magnification. Users can selectively focus on key data points, personalize chart appearance, and reduce panning effort through simplified gestures.
