
In this interview, Carlo Azzarri introduces himself and his work with Africa RISING. It is one of a series of portraits of key people in the program.
Tell us about your background
I am a research fellow at the International Food Policy Research Institute (IFPRI) in the Environment and Production Technology Division (EPTD). I joined IFPRI in September 2011 and have been in charge of the Monitoring and Evaluation (M&E) component of Africa RISING officially since December 2012, although IFPRI’s involvement in the program started even before the first inception workshops.

Carlo Azzarri, monitoring and evaluation coordinator, IFPRI (Photo credit: IFPRI)

My background is in development economics with a major in econometrics. I studied at the University of Rome. The reason I was brought into Africa RISING is that I have experience in leading M&E – especially impact evaluations of development projects – analyzing large micro-level datasets, and conducting large multi-topic household and community surveys in developing countries with the World Bank and the Food and Agriculture Organization (FAO).
What do you do in your current position?
We are trying to set up a monitoring system to report to the United States Agency for International Development (USAID) on the nine chosen indicators. We are also building a user-friendly web interface (currently under development) so that the research teams and mega-site coordinators can monitor these indicators, overlay their values with a suite of biophysical and socio-economic characteristics at the fine spatial resolution available from HarvestChoice, and upload any documentation relevant to the program.
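To make the idea concrete, here is a minimal sketch in Python (with invented site names, columns, and values; this is not the program's actual system or the HarvestChoice data format) of how household-level indicator values could be aggregated by site and overlaid with site-level biophysical and socio-economic characteristics:

```python
# Illustrative sketch only: hypothetical data, not Africa RISING's monitoring system.
import pandas as pd

# Hypothetical household-level indicator data, one row per surveyed household
households = pd.DataFrame({
    "site": ["Babati", "Babati", "Kongwa", "Kongwa"],
    "maize_yield_t_ha": [1.8, 2.4, 1.1, 1.5],   # indicator reported to the donor
    "improved_variety": [1, 0, 1, 1],            # 1 = household adopted an improved variety
})

# Hypothetical site-level characteristics (the kind of spatial layers HarvestChoice provides)
sites = pd.DataFrame({
    "site": ["Babati", "Kongwa"],
    "mean_annual_rainfall_mm": [950, 600],
    "market_access_hours": [1.5, 3.0],
})

# Aggregate indicators to the site level and overlay them with site characteristics
summary = (households.groupby("site")
           .agg(mean_yield=("maize_yield_t_ha", "mean"),
                adoption_rate=("improved_variety", "mean"))
           .reset_index()
           .merge(sites, on="site"))
print(summary)
```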
Our second objective is to try and provide a consistent evaluation (using impact assessment as well as other methods) across the three mega sites. It’s a challenge because it’s one of the first agricultural research for development (AR4D) projects that tries to provide an economic evaluation.
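As a purely illustrative sketch of the kind of estimator such an evaluation might rest on (simulated data, hypothetical variable names, and not necessarily the exact method the team uses), a simple difference-in-differences regression compares the change in an outcome between households exposed to an intervention and comparable households that were not:

```python
# Illustrative difference-in-differences sketch on simulated data; not program results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # simulated households per group per period

rows = []
for treated in (0, 1):
    for post in (0, 1):
        # Common time trend of 0.2 plus a 0.3 effect for treated households after the intervention
        outcome = 2.0 + 0.2 * post + 0.3 * treated * post + rng.normal(0, 0.5, n)
        rows.append(pd.DataFrame({"outcome": outcome, "treated": treated, "post": post}))
data = pd.concat(rows, ignore_index=True)

# The coefficient on the treated:post interaction estimates the average treatment effect
model = smf.ols("outcome ~ treated * post", data=data).fit(cov_type="HC1")
print(model.summary().tables[1])
```

In this simulated example the interaction term recovers the 0.3 effect built into the data; a real evaluation would of course also have to address selection, spillovers, and the spatial heterogeneity across sites.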
I am coordinating the team and we are focusing our current activities on the household and community surveys in all Africa RISING countries, looking at a lot of quantitative indicators and outcome variables so as to provide baseline values for all the relevant characteristics and targets.
It is fascinating to work with CGIAR colleagues from different centres with diverse backgrounds. It’s challenging sometimes but undoubtedly very enriching for my professional development to interact with the non-economists in the group.
What are your plans for Africa RISING?
The main goal is to provide guidance to our program coordinators about the performance of each intervention – which is very difficult – and to evaluate the AR4D approach in a solid way. It's all new research for us because there is very little literature on this topic: a special issue of Food Policy in February 2014 focused on assessing AR4D interventions, mostly through impact evaluation methods. It's a very recent topic and we would like to contribute to this emerging literature; that is one of my primary goals.
The other important goal is to convince the program donor to fund a second phase of the project on the strength of our statistically sound evaluation. If we do it carefully, we can show how the project has performed over the initial four to five years. We may end up with a success or a failure, but the crucial aspect for us is to measure it rigorously. It is a learning phase for us all, and we are trying to generate knowledge on the performance of the project with the ultimate objective of improving the livelihoods of the poor.
What are the biggest Africa RISING challenges and how do we deal with them?
The two biggest challenges I see are interrelated:

  • We have embarked on an AR4D program but USAID is a development, not a research, institution. The donor has tasked many CGIAR centres to provide independent, decentralized, participatory, demand-driven research interventions on the ground but has contracted IFPRI to provide a coherent and unified M&E framework. This leads to the other challenge…
  • Our CGIAR colleagues are not economists (they are breeders, plant pathologists, agronomists, etc.), so we have different methods, views, and tools for evaluating performance, and we use different metrics. The challenge is precisely to integrate the two approaches. Economists conducting evaluations are often too constrained by their background to look at the broader options for evaluation. On one hand, they should put more effort into understanding agronomists’ practices and the different delivery mechanisms through which the interventions spread spatially. On the other hand, agronomists should understand and approve the economic and statistical methods needed to go beyond input-output measures.

One important contribution of economists in that direction is that our actions are driven by (and founded on) statistical principles. We want to say something about the general validity of our conclusions regarding the effects of the treatment provided; that is the biggest challenge.
The contribution of agronomists is that they have a better understanding of how proper research adoption should proceed and of how we should look at metrics related to biophysical science. Taken together, we can provide insights about the effects of technology adoption on one side, and about farm inputs/outputs and farmers’ wellbeing on the other.
Anything else you want to share?
I feel sorry that we built an extremely long and tedious household questionnaire (taking about four hours on average for farmers to complete). Every time we present it to the research teams, the reaction is that the questionnaire is overly long; nevertheless, every time they ask: “can you add this important module for us?” Some of the questions are perhaps less relevant to them than what they would like to ask, but we have to integrate the two approaches in one questionnaire while trying not to confuse farmers. Hence, it’s quite a task to juggle those needs. In addition, we do not always know which information our agronomist colleagues would like to solicit, which is why we try to be as comprehensive as possible, though at a quite substantial cost.

Latest Comments

Kebebe
May 1, 2014, 10:08 pm
I sympathise with Carlo about the frustration his team encountered in reconciling divergent expectations. I also see the misunderstanding among biophysical scientists about statistical requirements (e.g., sample size, keeping key variables, framing of the questions, etc.) when working with observational data. Biophysical scientists are used to working with data generated in controlled experiments, where a very small sample size (the magic number 30) and a few variables are enough. That is not the case in economics, which deals with survey data. For a biophysical scientist, it is not easy to understand the importance and meaning of some variables; they would either have to attend a lot of courses in econometrics or spend many years with economists, which is almost impossible. On the other hand, economists have a taste for including a lot of variables in one questionnaire. Some variables are a must, but many of them are not necessarily relevant for that particular project. They must admit that the motive is to get data on more variables so that they can write a lot of papers in the future. In the end, the farmers have to suffer through never-ending questions. Consequently, the researchers get poor-quality data, and the process of torturing a poor dataset continues for a very long time. Finally, they end up with a few publications and a useless dataset that gets dumped.
From my experience, the solution lies in defining the roles and demarcating the level of involvement of scientists of different backgrounds. Trying to bring an interdisciplinary team of scientists to an equal level of understanding on every issue is a waste of time and resources. Once a general understanding of each component and of the role of each team is established, each team should be able to work on what it does best.
Carlo
May 22, 2014, 8:14 pm
Hi Kebebe, thanks for your comments, with which I generally agree. During our recent annual Institution Retreat at IFPRI we organized two sessions on how economists and agronomists can work together better. The lessons learned were the following (apologies if they are somewhat trivial):
"Participants in both sessions concluded that the main ingredient for a successful project is integration of approaches and methods. Professionals conducting evaluations are often too constrained by their background to look at the broader options for evaluation. On one hand, economists should definitely increase their effort to deeply understand agronomists’ practices and the different delivery mechanisms through which the interventions spread spatially. On the other hand, agronomists should understand and approve the economic/statistical methods to look beyond physical input-output measures and embrace social science impact evaluation approaches, whose aim is to detect farmers’ behavioral changes. One important contribution of economists on this front is that their actions are driven by and founded on statistical principles whenever statements on the general validity of the treatment/project are sought. Agronomists, meanwhile, provide an understanding of how proper adoption should proceed. Taken together, their perspectives can provide insight into barriers to the proper use of technology and the effects of technology adoption on the farm’s input and output matrix as well as farmers’ well-being."
Coming back to your point, I am frankly against vexing farmers with a lot of questions, some of them unnecessary. The problem in our case is that our surveys need to respond to multiple objectives, concerning not only the evaluation but also project management. In addition, the objectives and outcome variables against which the project will be assessed are still unclear, so we were forced to design a very comprehensive instrument to avoid missing important information that our CG colleagues might want to see in the future; otherwise we would miss the opportunity to collect baseline data. Also, in each country where we fielded the survey, all the stakeholders discussed the content of each instrument in depth over a full two-day meeting, looking at each proposed question. So there was no imposition of a particular questionnaire or question from our team; it was rather a participatory approach with our CG colleagues and the research teams who conduct the interventions on the ground.
