PUC-Rio Opus Research Group
This web page presents the supplementary material of the paper "On the density and diversity of degradation symptoms in refactored classes: A multi-case study".

Authors: Willian Oizumi, Leonardo Sousa, Anderson Oliveira, Luiz Carvalho, Alessandro Garcia, Thelma Colanzi, Roberto Oliveira

The following sections present complementary information regarding the paper.

Root canal refactoring is a software development activity intended to improve maintainability-related attributes such as modifiability and reusability. Although this activity contributes to these attributes, deciding when to apply root canal refactoring is far from trivial. In fact, identifying which elements should be refactored is not a cut-and-dried task. One of the main reasons is the lack of consensus on which characteristics indicate the presence of structural degradation. Thus, we evaluated whether the density and diversity of multiple automatically detected symptoms can be used as consistent indicators of the need for root canal refactoring. To achieve our goal, we conducted a multi-case exploratory study involving 6 open source systems and 2 systems from our industry partners. For each system, we identified the classes that were changed through one or more root canal refactorings. We then compared refactored and non-refactored classes with respect to the density and diversity of degradation symptoms. We also investigated whether the most recurrent combinations of symptoms in refactored classes can be used as strong indicators of structural degradation. Our results show that refactored classes usually present a higher density and diversity of symptoms than non-refactored classes. However, the root canal refactorings that developers perform in practice may not be enough to reduce degradation, since the vast majority had little to no impact on the density and diversity of symptoms. Finally, we observed that the symptom combinations in refactored classes are similar to those in non-refactored classes. Based on these findings, we elicited an initial set of requirements for automatically recommending root canal refactorings.
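To illustrate how the two indicators compared in the study can be operationalized, the sketch below computes symptom density and diversity per class from a list of detected symptom instances. This is a minimal illustration, not the paper's exact procedure: the input format, the example symptom names, and the normalization of density by lines of code are assumptions made here for clarity.

    from collections import defaultdict

    # Hypothetical detections: (class_name, symptom_type) pairs, as a
    # symptom detection tool might report them. Not the paper's dataset.
    detections = [
        ("OrderManager", "God Class"),
        ("OrderManager", "Long Method"),
        ("OrderManager", "Long Method"),
        ("OrderManager", "Feature Envy"),
        ("InvoicePrinter", "Long Method"),
    ]

    # Lines of code per class, used to normalize density (an assumption;
    # density could also be defined as a raw symptom count per class).
    loc = {"OrderManager": 420, "InvoicePrinter": 150}

    def density_and_diversity(detections, loc):
        """Return {class: (density, diversity)}, where density is the number
        of symptom instances per 100 lines of code and diversity is the
        number of distinct symptom types detected in the class."""
        per_class = defaultdict(list)
        for cls, symptom in detections:
            per_class[cls].append(symptom)
        result = {}
        for cls, symptoms in per_class.items():
            density = 100.0 * len(symptoms) / loc[cls]
            diversity = len(set(symptoms))
            result[cls] = (density, diversity)
        return result

    for cls, (den, div) in density_and_diversity(detections, loc).items():
        print(f"{cls}: density={den:.2f} symptoms/100 LOC, diversity={div} types")

Under this formulation, a class with many instances of a single smell scores high on density but low on diversity, which is exactly the distinction the study draws between the two indicators.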

The paper is available for download at the following link: [Download]

[Paper Data] [Paper Data After Second Filter] [Raw Data Open Source] [Refactorings Open Source] [OpenPOS System] [UniNFe System]

