The Department for International Development (DFID) and the British Aid for Small Enterprises (BASE) are supporting micro-finance projects in Kenya. The goal of the projects, as set out in the logical framework, is to provide additional employment and self-employment opportunities, especially for poorer people, and to increase their incomes through improvement in the production capacity of their micro-enterprises. For this goal to be attained, the capacity of private sector intermediary micro-finance institutions to promote micro and small enterprises (MSEs) on a sustainable basis is being developed. Indicators that help to measure progress toward attainment of the goal, such as the number of jobs created by the MSEs and growth in the capacity of micro-finance institutions, have been spelt out. However, to know how far this goal is being attained, impact assessment needs to be carried out.
This paper examines key issues that need to be borne in mind
by those carrying out impact assessment. It considers the
conceptual framework that guides assessment, research design,
methods and techniques, gender relations and the problems of
attribution and fungibility.
Noponen (1997:3) holds that most organisations establish
monitoring and evaluation systems to help them learn from
their experience and use the experiences to improve their
performance, expand their operations or adapt some of their
operations to local situations.
Evaluation has been defined by Scriven (1967), Glass (1969) and Stufflebeam (1974) as the assessment of the merit or worth of a programme. The Joint Committee on Standards of Evaluation (1981) defined evaluation as the systematic investigation of the worth or merit of some object. Suchman (1967:7) saw evaluation as referring to the processes of assessment or appraisal of value. According to Lichfield et al. (1974:4), appraisal refers to the process of analysing a number of plans or projects with a view to searching out their comparative pros and cons, and the act of setting down the findings of such analysis in a logical framework.
These definitions show that the concepts "evaluation", "assessment" and "appraisal" are synonymous and are used interchangeably.
The concept "impact assessment", which is widely used in the literature on micro-enterprise, refers to a type of evaluation or assessment that focuses on the outcomes or effects of a programme (Oakley, 1987:31). Goldmark and Rosengard (1981:10) see impact evaluation as referring to the assessment of a small-scale enterprise's effect on its intended population. The assessment entails an analysis of the enterprise's viability and its interaction with and influence on the community as an outcome of an external programme of assistance. Goldmark and Rosengard caution that impact evaluations should not only describe financial or managerial changes occurring within the micro-enterprise and how far the changes are meeting development objectives, but also observe the changes that have taken place in the community.
Impact evaluation studies have become popular with donors and, as a corollary, have become a significant component of donor funding and, consequently, of recipient institutions (Hulme, 1997). Their objectives are:
a) to figure out the effects of intervention in changing the conditions facing the target population (Oketch et al.);
b) to objectively justify continuing support to MSEs and also validate their choice of given modes of intervention;
c) as a stage in the project planning process, evaluation seeks to provide information pertaining to important implications of the planning process, i.e., it helps to establish what happened where particular options were taken up, whether anticipated effects occurred, who gained or lost, when the effects occurred, and the efficiency of the investment in relation to resources used and benefits derived (Lichfield et al., 1974).
To achieve these objectives, donors seek more information about programme effectiveness than is readily available from the routine impact and monitoring systems of recipient institutions. Besides measuring the efficacy of programmes, donors often emphasise impact evaluations to meet the accountability demands of their home governments and thus justify continued support. To this extent, impact evaluations tend to be donor-driven (Hulme, 1997). Donor institutions such as DFID have a legitimate interest in measuring programme impacts, as in the case of the REME project.
Impact evaluation also exposes internal problems and constraints, and provides benchmark information for comparing, ranking and selecting sets of appropriate methods (REME Project Proposal, 1997).
These objectives place high demands on the quality and accuracy of data. However, given the context of developing countries (limited numbers of professional researchers, few written records, illiteracy, communication problems, lack of respondent motivation and limited budgets), such evaluations might not generate accurate measurements of impacts, and caution has to be exercised when they are performed.