Abstract
This paper explores how machine translation (MT) technologies handle issues of bias, neutrality, and gender representation, drawing attention to the ethical and linguistic implications of automated language processing. Although MT systems such as Google Translate and DeepL have improved dramatically, they often reproduce or amplify gender stereotypes present in their training data. Using examples from gendered languages such as Spanish, German, and Uzbek, the study demonstrates how MT outputs frequently default to masculine forms, erasing female or neutral identities. The analysis considers both linguistic and sociopolitical dimensions, questioning whether neutrality in MT is achievable or desirable. Findings indicate that while technical adjustments—such as gender tags and inclusive design—can mitigate bias, deeper challenges remain embedded in cultural and ideological assumptions underlying training corpora. The paper concludes that machine translation cannot be treated as a neutral medium; instead, it must be critically evaluated as a cultural product with ethical consequences. Advocating for transparency, accountability, and human oversight, this study positions MT as a site where technology, language, and social justice intersect.