A Simple, Efficient, Context‐sensitive Approach for Code Completion
Author(s) -
Asaduzzaman Muhammad,
Roy Chanchal K.,
Schneider Kevin A.,
Hou Daqing
Publication year - 2016
Publication title -
Journal of Software: Evolution and Process
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.371
H-Index - 29
eISSN - 2047-7481
pISSN - 2047-7473
DOI - 10.1002/smr.1791
Subject(s) - computer science , source code , code completion , artificial intelligence , machine learning , natural language processing , programming language
Code completion helps developers use application programming interfaces (APIs) and frees them from remembering every detail. In this paper, we first describe a novel technique called Context‐sensitive Code Completion (CSCC) for improving the performance of API method call completion. CSCC is context sensitive in that it uses new sources of information as the context of a target method call. CSCC indexes method calls in code examples by their context. To recommend completion proposals, CSCC ranks candidate methods by the similarity between their contexts and the context of the target call. Evaluation using a set of subject systems and five popular state‐of‐the‐art techniques suggests that CSCC performs better than existing type‐based or example‐based code completion systems. We conduct experiments to determine how the different contextual elements of the target call benefit CSCC. Next, we investigate the adaptability of the technique to support another form of code completion, namely field completion. Evaluation with eight different subject systems suggests that CSCC can easily support field completion with high accuracy. Finally, we compare CSCC with four popular statistical language models that support code completion. Results of statistical tests from our study suggest that CSCC not only outperforms the techniques based on token‐level language models, but in most cases also performs better than, or as well as, GraLan, the state‐of‐the‐art graph‐based language model. Copyright © 2016 John Wiley & Sons, Ltd.
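The core idea in the abstract, indexing method calls by their surrounding context and ranking candidates by context similarity, can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact algorithm: the token-set representation of a context and the Jaccard similarity measure are assumptions made for clarity, and the class and function names (`ContextIndex`, `complete`, `jaccard`) are invented for this sketch.

```python
# Illustrative sketch of context-sensitive method-call completion:
# method calls seen in code examples are indexed by the tokens around
# them; at a completion point, candidates are ranked by how similar
# their stored contexts are to the current context.
from collections import defaultdict


def tokenize(context):
    """Reduce a context string to a set of tokens (simplifying assumption)."""
    return set(context.split())


def jaccard(a, b):
    """Jaccard similarity between two token sets (assumed measure)."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0


class ContextIndex:
    def __init__(self):
        # method name -> list of token sets observed around its calls
        self.index = defaultdict(list)

    def add_example(self, method, context):
        """Index one method call from a code example by its context."""
        self.index[method].append(tokenize(context))

    def complete(self, context, top_k=3):
        """Rank candidate methods by their best-matching stored context."""
        query = tokenize(context)
        scored = []
        for method, contexts in self.index.items():
            best = max(jaccard(query, c) for c in contexts)
            scored.append((best, method))
        scored.sort(reverse=True)
        return [m for _, m in scored[:top_k]]


# Example usage: the call whose recorded context overlaps most with the
# completion-point context is proposed first.
idx = ContextIndex()
idx.add_example("append", "list result add item loop")
idx.add_example("close", "file stream finally cleanup")
print(idx.complete("list add item"))  # "append" ranks first
```

The design choice worth noting is that ranking uses the *best-matching* stored context per candidate rather than an average, so one close code example is enough to surface a method even if its other recorded usages differ.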