
Your Research Isn’t for Real (Unless You Publish)

Following the recent appearance of the new VAD magazine (an organ of the well-known Veredes website and a publication in which I have had the honour to be involved), and considering the current complex state of affairs in architectural research, one might wonder whether it makes any sense to launch yet another magazine (an evidently rhetorical question, given my contribution to its first issue).

The architecture publications that have appeared over the last ten years – whether under the auspices of the academic establishment or not – have apparently had one exclusive, ineffable objective: disseminating research. But are they fulfilling that objective? And, more importantly, how do we gauge whether they are?

In the different debates taking place on the quality of Spanish research, there’s often no alternative but to adopt criteria used in other countries. There, the most important thing is the number of times a paper written by a given author, usually a professor or a researcher working on a doctoral thesis, is cited in other publications in the same field. Highly detailed statistics about the number of such citations are offered by different institutions which have built up a whole business based on such measurements and rankings. And of course, priority consideration is given to works in English, the lingua franca of the world market for these systems.

In terms of methodology, and at first sight, nothing seems to be amiss here. The number of citations seems an efficient criterion: the more citations an article has, the more impact it appears to have had. However, like any parameter based on numeric measurement, this system has its limits. For example, what about an article that has been cited many times, but always to contradict and criticise its content? For Spain's teaching quality agencies, that isn't an issue.

But perhaps it is, albeit partly. Some texts are obviously contentious from all angles. Take, for example, some of Patrik Schumacher’s work, with its defence of ethically questionable measures and its arguments for abolishing social housing or privatising streets, parks and gardens. Papers of this type may well turn out to have more citations than a major study by someone like Sarah Whiting, the current dean at Harvard. If the criteria used are strictly quantitative, then Schumacher should logically be appointed to a university chair despite the silly things he says. After all, his work has been cited many times…

But there you have it. Quantitative criteria are what they are. They form part of a system whose validity must be acknowledged in the exact and experimental sciences, such as mathematics, physics and the medical disciplines. In architecture and the humanities, however, other criteria should be considered. We mustn’t lose sight of the fact that in the true dissemination of research results, what matters is not only the number of mentions counted in other, demonstrably scientific publications, but also the impact those publications have in other circles. Consider, for example, the paradoxical situation of an article that is cited and has a great impact on social networks but receives almost no attention in other forums.

Doubts like this should shake up the rigid system we’re tied to at present. Because at the end of the day, there can be no true research without that long sought-after dissemination of results. To make matters worse, the research journals with the highest impact factors have been tainted by the shameful commercialisation of articles, with writers and readers being charged to submit or read them, and sometimes even by cases of academic cronyism. And that’s not to mention predatory journals.

In view of all this, it’s my hope that the whole universe of publications and impact factors as we know it today will at some point collapse, or at least undergo a transformation. There are several reasons. The first is that research cannot be measured as a number of citations collected by various American agencies, which in turn charge universities for access to the very ranking lists they draw up. The second is that in architecture, what’s been built is often more influential than what’s been written. That’s a proven fact, historically.

I don’t have a magic, readymade, foolproof solution to this chaos. But, among other equally important things, true excellence in research means having more free, open channels of communication, better trained researchers, better funding and better team building. (And the role played by new magazines like VAD, and by all the others that may emerge in the present situation, seems to me to be positive.)

Finally, excellence in research means disseminating results, and I think that is where one of the biggest bottlenecks currently lies. Relying exclusively on quantitative methods has its shortcomings. But introducing qualitative factors has its own dangers and seems susceptible to arbitrary unfairness. That doesn’t mean criteria can’t be found. Research in architecture is tricky terrain, and we need to advance over it with great care.

Nevertheless, any new channel that sets out to pick its way cleanly, optimistically and enterprisingly over this minefield is very welcome.


Text translated by Andrew V. Taylor
By:
Architect and teacher; he combines the dissemination and teaching of architecture with the work of his own practice and the blog 'Múltiples estrategias de arquitectura'.
