Conferences in Software Engineering: Reflections after 30 Years

ACM SIGSOFT Blog
Feb 7, 2024


By Lionel Briand, University of Ottawa, Canada; and Lero, the SFI National Research Centre for Software hosted by the University of Limerick, Ireland

The Good, The Bad, and The Ugly

Preamble:

What I am about to share mostly concerns general software engineering conferences, such as ICSE, FSE, and ASE. Though I was told by colleagues that some of my points are valid in other communities, I am not trying to generalize beyond this scope, as I base my arguments on personal experience.

Note that, among others, I have co-authored 26 papers in the technical track of the International Conference on Software Engineering (ICSE) over the years, including two that received Most Influential Paper (MIP) awards, though ICSE has never been my focus. I have also submitted many papers that were rejected. My first ICSE paper was presented in 1993 in Baltimore, and I was the ICSE PC co-chair in 2014 in India! I therefore have substantial experience with SE conferences, in addition to what many colleagues have shared with me, on which to reflect.

I have frequently expressed my views on this topic, but I have never articulated them in one place, in sufficient detail. I will, however, do my best to remain concise, as I presume nobody wants to read an extensive essay on such a topic.

Referring to one of my favorite movies (Sergio Leone, 1966), I will talk about “The Good, The Bad, and The Ugly” of conferences, and then discuss what we can do about it.

The Good:

Good conferences are truly enjoyable events and a unique occasion to meet your peers, many of whom become friends over the years, to exchange ideas and arguments face to face, to establish collaborations, or simply to enjoy the company of mostly friendly and talented people. Many exciting workshops, tutorials, and specialized co-located conferences take place at major general conferences like ICSE. Further, in recent years, interesting tracks have been established, such as the vision, industry, society, and journal-first tracks. General conferences therefore play a central role in building our research community, most particularly for newcomers and younger researchers. I therefore expect many of you to attend ICSE 2025 in Ottawa, Canada! ;-)

The Bad:

Early in the history of computer science and then software engineering, there were no research journals like in established sciences. Our community, starting in the 1970s, quickly established conferences that became central publication venues. The first edition of ICSE, for example, took place in 1975 in Washington, DC, USA (and was called NCSE, the National Conference on Software Engineering). That conference tradition stayed with us even as our research community matured and evolved. Selective and high-quality journals were established along the way, such as IEEE TSE (1975), ACM TOSEM (1992), and Springer Empirical Software Engineering (1996). However, unlike in other sciences and engineering disciplines, conferences such as ICSE remained important publication outlets.

A priori, there is nothing wrong with that, and we don’t necessarily have to resemble other disciplines in our publication practices. Except that, IMO, there are good reasons why these disciplines rely on scientific journals for thorough reviewing and archival publications.

First, let me address the conference review process. The way it works is that a large number of papers are usually submitted, in a field that has now become extremely diverse (see my ICSE 2022 keynote), requiring a very large and diverse program committee. The review load must be balanced across PC members, who are required to review between 15 and 20 papers, sometimes over two rounds. (To our credit, the process keeps changing in our constant and perhaps desperate attempt to improve it.) Since PC members tend to be busy people, such an endeavor is usually rather overwhelming. Reviewers are then under significant pressure and often resort to the common strategy of looking for reasons to reject papers, despite the injunctions of PC chairs. Further, the forming of the PC and the assignment of reviews cannot be easily optimized; as a result, most submissions are assigned to at most two experts, and in many cases to no more than one. Last, given the space constraints conference authors must comply with and the resulting high density of the material presented, finding relevant technical information, and thus addressing one’s doubts as a reviewer, can be a challenge.

The result is that, though reviews tend to be well written and typically appear smart (academics are good at that), my experience is that they often contain mistakes or are not thorough and precise, especially when compared to what one gets from top journals. To many authors, though part of the problem may lie with the way they reported their work, the reviews they get seem largely arbitrary. As a result, authors find it difficult to improve the papers significantly based on the reviews. This is in contrast with top journals where, based on my experience, the final manuscript, when accepted, is usually far better than the initial one.

One main goal of the review process is for researchers and students to engage in a dialog with their peers, a structured and moderated exchange. This is an essential process in any research community since, contrary to intuition, reviewing is, to a significant extent, a subjective social process, at least in a field like software engineering. Unfortunately, though rebuttal processes are sometimes employed (with varying success) to mitigate this problem, that dialog is either non-existent or limited at conferences. Recent attempts to introduce a review process that allows for major revisions, similar to journals, have been met with limited success, for reasons probably related to the challenges mentioned above.

The above issues are particularly painful for students and junior researchers, especially those who are in an academic system requiring publishing in general SE conferences. Some of them have sufficient self-confidence to handle it but others, typically younger students with limited experience, are deeply affected and may question their abilities and motivation to further engage in research. Of course, as supervisors, part of our role is to guide students through this process, but we can only provide responses that are partially satisfactory.

The process above also leads to many resubmissions to different conferences, handled by different reviewers, often with varying opinions, and globally results in a great deal of wasted reviewing effort. This is in contrast to top journals, where authors are provided with clear requirements for eventual acceptance and are invited to satisfy them, usually with significant effort.

The journal review process, if well managed, addresses many of the problems mentioned above thanks to mostly expert reviewers and proper moderation by a knowledgeable associate editor. What I find particularly puzzling is our persistence in trying to fix conferences by bringing them closer to journals, a futile attempt in my opinion, while we do have several journals of high quality and others that could easily be propelled to higher standards.

Last, I want to make it clear that I don’t blame conference PC members or chairs for the problems mentioned above, as they usually show strong dedication to their task. This is a systemic problem.

The Ugly:

What are the indirect consequences of the issues I raised above?

Of course, first, a functional and efficient review process is essential for the cohesion and attractiveness of a research community. But beyond that, what this leads to in practice is that getting a paper accepted in a high-profile general conference has largely become an exercise in style. A difficult one, almost an art form, but an exercise in style nevertheless. Papers don’t need to address important or hard problems, and they don’t need to provide the beginning of a plausible solution; they simply need to be written in such a way that they are extremely difficult to reject without significant effort. I should note, however, and this is part of the exercise, that these accepted papers are often beautifully written. Of course, some papers that don’t fit this pattern are accepted, but their probability of acceptance is much lower.

This has significant consequences for our field. Innovative contributions on hard and important problems are unlikely to make it and are therefore discouraged. These, by definition, are easy to reject: typically, the case study may be too small, the evidence may be weak because some situations are not addressed (partial solutions), and not all doubts can be easily cleared. Also note that the technical track implicitly discourages any paper addressing industrial problems, which are hard to report on within space constraints without a high risk of rejection (hence the creation of “software engineering in practice” tracks, but these are often viewed as second-class citizens). This problem is compounded by futile and harmful ranking exercises such as CSRankings, which rely exclusively on an arbitrary subset of conferences, including the three major ones, thus further pressuring many SE academics to focus on conferences.

Another interesting personal experience I have had over the years is that I have rarely been able to successfully use (e.g., as a baseline for research) or recommend to industry partners a solution from a paper at a general conference such as ICSE. More often than not, there are hidden and often unrealistic assumptions, results that cannot be reproduced or that are only valid on the selected benchmarks, scalability issues, or the problem is not defined in a meaningful way from a practical standpoint. This is usually difficult to notice when reviewing a paper for a couple of hours, especially since, as noted above, such papers are often written with elegance and talent.

The Future:

Though conferences are extremely important, we should seriously consider whether using their technical tracks as pseudo-journal substitutes (a publication model that we all got used to, or even addicted to) is beneficial to our community, or whether we should take inspiration from more mature fields, for example in engineering or the natural sciences. That is, utilizing conferences primarily for meetings, networking, community building, establishing collaborations, and reporting early ideas, and focusing on journals for archival publication.

One significant improvement in recent times has been the creation of the Journal-First track. If well managed, this is a mechanism that allows those who wish to focus on journal publication to present and discuss their work at conferences, and therefore to remain in touch with their community. However, there are forces at play attempting to limit such tracks, by minimizing presentation times or the number of presentations, claiming that they diminish the visibility of papers in the technical track. Needless to say, without elaborating further, I find this argument ridiculous and will adamantly oppose such attempts to weaken the Journal-First track.

Personally, from now on, I will focus my publication and reviewing efforts on top journals. I love general SE conferences, but getting meaningful feedback from my peers is a priority, not only for me but also for my students. And based on the feedback I have received, I am apparently not the only one …

I have agreed to serve as co-General Chair of ICSE 2025, along with my colleague Tim Lethbridge, because I believe in the positive and central role of this conference, as described above, but not as a substitute for journal archival publication.

Find more information about the author at www.lbriand.info.

Disclaimer: The posts in the SIGSOFT Blog are written by individual contributors and any views or opinions represented in their posts are personal, belong solely to the blog authors, and do not necessarily represent those of ACM SIGSOFT or ACM.

