
EVALUATION METHOD TO DETERMINE ICT USAGE INDICATORS ON BRAZILIAN IFES

Rogerio de Castro Melo (CEFET) [email protected]

Leydervan de Souza Xavier (CEFET) [email protected]

Ilda Maria de Paiva Almeida Spritzer (CEFET) [email protected]

Technology continuously advances to create better living conditions for mankind. Individuals demand increasing levels of product and service excellence, and these new demands point toward an increasingly intense development of technological solutions. Public and private organizations need to readapt to this new scenario presented by globalization in order to improve their overall quality, reduce costs, explore the potential of their human resources and be transparent to society. Information and Communication Technologies - ICT are strategic to achieve these goals, as they provide the tools to support production process improvement, to help decision making and to fulfill the institutional mission. Education is another key element in this context, as it works as a social organizing element that gives individuals the abilities and critical awareness to create new techniques or operate existing ones. In fact, a sustainable expansion of undergraduate education in Brazil is part of a national project. One factor that supports this movement is an undergraduate evaluation model that can be continuous. The objective of this article is to determine indicators of ICT use at Brazilian public undergraduate federal institutions, relying on a program evaluation methodology. A sample of three Brazilian undergraduate federal institutions - IFES was chosen: Centro Federal de Educação Tecnológica Celso Suckow da Fonseca - CEFET/RJ, Centro Federal de Educação Tecnológica de Minas Gerais - CEFET/MG and Universidade Tecnológica Federal do Paraná - UTF/PR. This work also intends to give further contributions, as these indicators can be included in the dimension of infrastructure, including information and communication resources, of the institutional self-evaluation, in addition to helping ICT managers obtain a diagnosis of ICT use in their institutions and better identify which areas need more attention.

Keywords: ICT, evaluation, IFES

XV INTERNATIONAL CONFERENCE ON INDUSTRIAL ENGINEERING AND OPERATIONS MANAGEMENT

The Industrial Engineering and the Sustainable Development: Integrating Technology and Management. Salvador, BA, Brazil, 06 to 09 October - 2009


1. Introduction

Technology continuously advances to create better living conditions for mankind. Individuals demand increasing levels of product and service excellence, and these new demands point toward an increasingly intense development of technological solutions (VIEIRA PINTO, 2005). Public and private organizations need to readapt to this new scenario presented by globalization in order to improve their overall quality, reduce costs, explore the potential of their human resources and be transparent to society.

Information and Communication Technologies - ICT are strategic to achieve these goals, as they provide the tools to support production process improvement, to help decision making and to fulfill the institutional mission.

However, even though it is commonly accepted that a good ICT infrastructure is a necessary condition to support the development of organizational processes, it is difficult to measure the impact of ICTs on organizational improvement, which makes it harder for ICT managers to identify the priorities in each area.

Education is another key element in this context, as it works as a social organizing element that gives individuals the abilities and critical awareness to create new techniques or operate existing ones (VIEIRA PINTO, 2005). Adam Smith, in The Wealth of Nations, shows in his theory that work is not only a production factor but also one of the types of capital: human capital. This human capital generates economic growth, and worker productivity is directly proportional to the technical and scientific knowledge gained throughout the worker's life experience. So, human capital must be improved not only to generate more wealth but also to reduce poverty and social inequality. The improvement of education systems is one of the ways to achieve this goal (IPEA, 2008).

However, education itself does not guarantee national growth, as there are external factors to consider, such as the economic and political scenario and technological evolution. As technology grows at an intense pace, the more capable individuals will be the ones with higher levels of schooling. In Brazil, individuals with an undergraduate degree have a better income-to-schooling ratio than their peers with fewer years of school (IPEA, 2008).

One of the greatest challenges of the educational system is to incorporate new technologies, and ICTs have a strategic role in supporting this goal (XAVIER; SPRITZER; MELO, 2006).

In fact, a sustainable expansion of undergraduate education in Brazil is part of a national project (DIAS SOBRINHO, 2002). One factor that supports this movement is an undergraduate evaluation model that is continuous, formative and considers the regional characteristics and specificities of Brazilian undergraduate education institutions.

Schartzman (1992) also observes that the Brazilian educational system, particularly its public sector, needs to react to external pressure from the government and society, and the implementation of a transparent and clear evaluation model is part of this process.

In this context, technical rationality argues, in favor of local and global socio-economic pressures, for the implementation of global management systems based on general models and global performance indicators. But the local performance of each organization should be evaluated according to its specificities, thus discarding indicators and other evaluation tools not created with these concepts in mind.

After several initiatives and debates about the evaluation of undergraduate education in


Brazil, the Ministry of Education created the Sistema Nacional de Avaliação da Educação Superior – SINAES. It has three evaluation instruments, and the core of the system is the evaluation of undergraduate institutions, called Avaliação das Instituições de Educação Superior – AVALIES, which has two parts: an institutional self-evaluation and an external evaluation conducted by a specialist team (SINAES, 2004).

The self-evaluation process is meant to be continuous, conducted by an institutional evaluation commission, and it is a way of knowing the reality of the institution and its specificities (INEP, 2004b). The suggested report presents a group of dimensions that interact with each other, and the one closest to ICTs is the dimension of infrastructure, including information and communication resources. However, the indicators shown in that dimension do not give a clear diagnostic map of how ICTs are being used in undergraduate institutions.

The objective of this article is to determine indicators of ICT use at Brazilian public undergraduate federal institutions, relying on a program evaluation methodology. A sample of three Brazilian undergraduate federal institutions – IFES was chosen: Centro Federal de Educação Tecnológica Celso Suckow da Fonseca - CEFET/RJ, Centro Federal de Educação Tecnológica de Minas Gerais - CEFET/MG and Universidade Tecnológica Federal do Paraná – UTF/PR.

This work also intends to give further contributions, as these indicators can be included in the dimension of infrastructure, including information and communication resources, of the institutional self-evaluation, in addition to helping ICT managers obtain a diagnosis of ICT use in their institutions and better identify which areas need more attention.

2. Evaluation

2.1 Evaluation history

Evaluation goes back to the beginning of human history, when Neanderthal Man selected the best types of wood to make his tools (WORTHEN; SANDERS; FITZPATRICK, 2004). More than two thousand years ago, China used public examinations to select government employees (DIAS SOBRINHO, 2002). The first quantitative measures of population size, death rates and public health indicators were reported in the 18th century.

The first initiatives to evaluate a major country-wide educational system took place in 1840 in the United States. From this period to the beginning of the 20th century, several associations with the purpose of accrediting educational institutions were created in the United States (STUFFLEBEAM; SHINKFIELD, 2007). The world crisis of 1929 caused the creation of several federal government agencies to supervise everything from human resources allocation to health programs, thus opening huge opportunities for program evaluation (WORTHEN; SANDERS; FITZPATRICK, 2004). Around the 1930s, Tyler coined the expression educational evaluation. However, Dias Sobrinho (2002) observes that in this period the educational evaluation systems were concerned only with school profitability and efficiency and not with curriculum development, drawing a parallel with the efficiency and productivity goals that were disseminated in United States industries at that time.

The social programs conducted by United States presidents John Kennedy and Lyndon Johnson in the 1960s needed some kind of monitoring, and this was reflected in improvements in the evaluation area. The recent history of educational evaluation had a milestone in 1965 with the approval of the United States Higher Education Act, under which educators needed to report to society in a process known as accountability (WORTHEN; SANDERS; FITZPATRICK, 2004).


Since 1973, evaluation theory has evolved toward greater professionalism and relevant scientific production, leading to significant improvements in the program evaluation area.

2.2 Evaluation concepts

Evaluation plays an important role on a daily basis. People evaluate everything from simpler things, like the type of haircut, to more complex questions, like the flow of investments in a specific program.

Among the various researchers there is no final definition of evaluation and its objectives. Scriven (1967 apud WORTHEN; SANDERS; FITZPATRICK, 2004) gives a simple and objective definition, in which evaluation is to judge the value or merit of something. Worthen, Sanders and Fitzpatrick (2004) provide a more complex definition, in which evaluation is the "identification, clarification and application of defensible criteria to determine the value (or merit), the quality, the utility, the effectiveness or the relevance of the evaluated object in relation to these criteria". Dias Sobrinho (2002) shows that evaluation must have an active connotation, as it needs to consider not only the final results but also the inputs, stakeholders, processes involved and boundary conditions. Carvalho (2009) indicates the relevance of evaluation in tracing a map of reality and in identifying critical success factors - CSFs. The researcher also argues that environmental conditions must be considered in every evaluation process.

Other researchers state that evaluation is a feedback process to the organization that allows the establishment of a link between performance and knowledge, thus helping with error detection and lessons learned, and creating conditions for decision making that improves processes (IPEA, 1999).

Research and judgment methods used by evaluations include: the determination of standards to measure quality, and verifying whether these standards should be relative or absolute to each other; the gathering of relevant information; and the application of suggested patterns for determining the value, quality, utility or relevance of the evaluated object. The final result leads to a series of recommendations intended to improve the evaluated object (WORTHEN; SANDERS; FITZPATRICK, 2004).

Dias Sobrinho (2002) reminds us that the evaluation process has political effects, as it can be used as an empowerment tool and to define government strategies, for example. Talmage (1994 apud WORTHEN; SANDERS; FITZPATRICK, 2004) argues that the political environment should be considered in the evaluation process as well.

Dias Sobrinho (2002) also observes that the evaluation must have multiple dimensions, as it encompasses different formats, contents and functions and points to a path that allows various goals to be accomplished.

The ethics and confidentiality of the data generated by an evaluation process are also emphasized by Carvalho (2009), Dias Sobrinho (2002), Worthen, Sanders and Fitzpatrick (2004) and Stufflebeam and Shinkfield (2007), who believe that ethics and transparency need to be embedded in the evaluation process as a whole.

The use of evaluation has its limits, and several evaluation studies have not improved their programs due to the misuse of adopted criteria or results. To avoid this situation, it must be clear to customers and to the evaluator that the evaluation process is part of a broader context that seeks continuous process improvement. The evaluator cannot promise more than the evaluation is capable of providing. The results provided by the evaluation are a suggestion, and


their implementation is the responsibility of the organization's administrators (WORTHEN; SANDERS; FITZPATRICK, 2004).

Although there are different visions concerning the definition and objectives of evaluation, various authors agree about evaluation types.

The formative evaluation concept is driven by the fact that the evaluation is supposed to offer information that will help program improvement, mainly during its implementation or modification; it is performed on a periodic basis and uses smaller samples (WORTHEN; SANDERS; FITZPATRICK, 2004). Gouveia (2005) observes that the formative evaluation offers a diagnostic perspective of the existing situation, thus supporting institutional development.

The summative evaluation is used when a decision must be made about program continuity or expansion. It is normally performed at the later stages of the program, on a non-periodic basis, and uses larger samples (WORTHEN; SANDERS; FITZPATRICK, 2004).

Both types of evaluation complement each other and, in practice, it is often difficult to tell one from the other. Figure 1 shows the relative emphasis of both evaluation types during the program lifecycle.

Figure 1 – Relationship between formative and summative evaluations and the program lifecycle (Source: WORTHEN; SANDERS; FITZPATRICK, 2004)

Other important concepts in the evaluation field are related to the internal and external dimensions. The internal evaluation is conducted by the program team itself. The external evaluation is performed by one or more external specialists. The former type has the advantage of the high level of knowledge the internal program team has about the evaluated object. On the other hand, external evaluators are naturally more independent from the evaluated object and can thus bring their field experience to the process and observe factors not seen by the internal evaluators (WORTHEN; SANDERS; FITZPATRICK, 2004).

Trends in the evaluation area also indicate an increasing use of qualitative methods in the process. These measures are less objective than quantitative ones, but they work as a complement to them and can capture factors of great value to the organization that are not shown by the traditional methods (WORTHEN; SANDERS; FITZPATRICK, 2004).

The use of several methods in an evaluation process, such as surveys, interviews and observation of the evaluated object, represents another important characteristic of modern evaluations (WORTHEN; SANDERS; FITZPATRICK, 2004). The authors agree with Stufflebeam and Shinkfield (2007) about the need for a theory-based evaluation process.


Researchers also show that an evaluation report should be concise, directed to the correct stakeholders, and should present results showing the weakest and strongest points of the evaluated object, as well as recommendations on how to improve the aspects that need more attention.

3. Evaluation methodology used to determine ICT usage indicators

After a literature review, the evaluation methodology proposed by Worthen, Sanders and Fitzpatrick (2004) was selected for this study. Reasons for selecting this method include its coverage, which allows it to be applied to a broad set of disciplines, and the existence of several aids to help define key evaluation aspects, such as the need for an evaluation and for an external evaluator, and the objectives that must be met by the evaluation. Other ICT-related studies, such as Campus Computing Report.Br (LITTO, 2004), Indicadores e Métricas para avaliação de e-Serviços (BRASIL, 2007) and The Connectivity Scorecard (NOKIASIEMENS, 2008), do not completely disclose their evaluation methodology. Also, the Campus Computing Report.Br presents ICT indicators for Brazilian undergraduate institutions, but not specifically for the IFES, and the other two studies are not related to education institutions. The Handbook on Constructing Composite Indicators: Methodology and User Guide (OCDE, 2005) is a very complete study, but it suggests very complex statistical calculations that were considered beyond the scope of the present work, and it is not directly related to ICTs.

As there was little or no information about ICT usage at the institutions studied, the emphasis of the work was to build a formative evaluation that would give a diagnosis of the ICT usage situation at each institution.

The first step of the methodology was to define the real need for an evaluation. To do this, some questions need to be answered. The main topics include whether there is a legal prerequisite for the evaluation, whether the object is relevant enough to justify a formal study, whether there are sufficient human and financial resources to carry out the process, and whether the stakeholders agree on how the evaluation results will be used. As all the answers were affirmative, the evaluation was indeed needed in this case.

The second step was related to the need for an external evaluator. In this case, the main questions were whether it was important for the study to have an external perspective, whether an external evaluator with the appropriate skills was available to conduct the process, whether an internal evaluator was available, and whether there were financial funds to sponsor the external evaluator. It was considered important to have an external perspective in the evaluation. Although there were no funds planned to sponsor an external evaluator, he is one of the authors of the study, so this was not an issue. Also, the author is an ICT consultant with 15 years of experience and was available for the purpose of this study. The internal ICT team at each institution was very busy according to their managers, so it was considered that no internal evaluators were available. This can be considered another reason to choose an external evaluator.

After that, there was a need to identify the objectives, the context and the public interested in the evaluation, as well as whether there had been a previous evaluation on this subject. The objective was to determine indicators of ICT usage at the IFES. The context was related to the strategic role ICTs play at those institutions, which need to account for their acts to the government and to society. The public interested in the evaluation was identified as the ICT managers of each institution. There had been no previous evaluation on this subject at any of the institutions studied.


It was decided to choose a sample of three IFES: Centro Federal de Educação Tecnológica Celso Suckow da Fonseca - CEFET/RJ, Centro Federal de Educação Tecnológica de Minas Gerais - CEFET/MG and Universidade Tecnológica Federal do Paraná – UTF/PR, because of their proximity to the evaluator's home address - the first one is located in Rio de Janeiro, Brazil, the same city where the evaluator resides, and the others are located about a one-hour flight from Rio de Janeiro - and the ease of access to their ICT managers. Another reason was that the author is a master's student at CEFET/RJ. Also, these three institutions are very similar in size and belong to a common group of Brazilian federal government technological institutions, the CEFETs.

After defining that the evaluation and an external evaluator were needed, as well as the objectives and scope of the study, an email letter was sent to each ICT manager inviting them to take part in the study. As they accepted, another email letter was sent regarding the commitment of the authors and related parties to the confidentiality of the data obtained during the research.

The first step in determining a diagnosis of ICT usage at the IFES was the so-called divergent phase of the evaluation. According to Worthen, Sanders and Fitzpatrick (2004), this phase consists of an investigative process to identify a broad set of questions related to the theme. It can use information sources such as interviews with the interested public, previously used questions and the evaluator's judgment skills.

For this phase, it was decided that a good starting point would be to use the set of existing questions from the Campus Computing Report.Br survey, which covers various aspects of ICTs at Brazilian undergraduate institutions. The evaluator, based on his ICT field experience and observation of the evaluated object (the author is a master's degree student at one of the institutions), removed a series of questions, consolidated others into different sections and created new question items. A first meeting was conducted with the ICT manager of CEFET/RJ. The summary of the modifications after the divergent phase is shown in Table 1.


Table 1 – Summary of changes during the evaluation divergent phase

Evaluation areas | Original number of questions | Questions created in this section | Questions modified in this section | Questions moved to this section | Questions moved to another section | Questions removed from this section | Number of questions after the divergent phase
1) General IT and computational policies on campus | 13 | 0 | 2 | 0 | | 7 | 6
2) IT infrastructure, networks and Internet | 29 | 0 | 8 | 1 | 1 | | 29
3) Academic IT policies | 41 | 0 | 1 | 0 | | 14 | 27
5) IT investments | 3 | | | | | 1 | 2
6) IT Strategic Planning | 8 | 3 | 1 | | | 5 | 6
7) About Free software | 4 | | | | | | 4
8) Networks and Academic Portals | 6 | 1 | 0 | 1 | 1 | 5 | 2
Total | 104 | | | | | | 76

As can be observed, there was a reduction of about 25% in the number of questions during the divergent phase. Approximately 16% of the questions were modified and 5% were created. These numbers show the relevance of the evaluator's experience and of the meeting with the ICT manager of CEFET/RJ.
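As a rough cross-check of these percentages, the short Python sketch below recomputes them from the per-section counts in Table 1. The split of created, modified and removed questions per section follows the reconstruction above, and taking the percentages relative to the 76 remaining questions is an assumption about how the figures were computed.

# Sketch: recomputing the divergent-phase summary figures from Table 1.
# Each tuple holds (original, created, modified, removed) per evaluation area;
# questions moved between sections cancel out in the totals.
divergent_rows = {
    "General IT and computational policies on campus": (13, 0, 2, 7),
    "IT infrastructure, networks and Internet": (29, 0, 8, 0),
    "Academic IT policies": (41, 0, 1, 14),
    "IT investments": (3, 0, 0, 1),
    "IT Strategic Planning": (8, 3, 1, 5),
    "About Free software": (4, 0, 0, 0),
    "Networks and Academic Portals": (6, 1, 0, 5),
}

original = sum(r[0] for r in divergent_rows.values())   # 104
created = sum(r[1] for r in divergent_rows.values())    # 4
modified = sum(r[2] for r in divergent_rows.values())   # 12
removed = sum(r[3] for r in divergent_rows.values())    # 32
remaining = original + created - removed                # 76

print(f"reduction: {(original - remaining) / original:.0%}")  # roughly 25-27%
print(f"modified:  {modified / remaining:.0%}")               # about 16%
print(f"created:   {created / remaining:.0%}")                # about 5%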

After those changes, the convergent phase of the evaluation took place. According to Worthen, Sanders and Fitzpatrick (2004), this phase consists of filtering the questions obtained in the divergent phase in order to consider only the main aspects of the evaluation, since restrictions of budget and time to analyze the questions must be taken into account. The researchers suggest an aid with questions that help to decide which questions are more relevant and which can be discarded; it is shown in Table 2.

Evaluation questions (to be answered for each candidate question 1 ... N):

1. Would it interest the key public?
2. Would it reduce current doubts?
3. Would it generate relevant information?
4. Would it be of permanent interest?
5. Would it be critical to the study?
6. Would it have an impact on the study?
7. Could it be answered in terms of:
   a) Human and financial resources?
   b) Time?
   c) Methods and available technology?

Source: WORTHEN; SANDERS; FITZPATRICK, 2004

Table 2 – Matrix to classify and select evaluation questions in the convergent phase

This phase can be considered a critical success factor for the evaluation process, and the interaction between the evaluator and the interested public is key to the success of the evaluation. For this reason, the customer and the evaluator should both be responsible for deciding which questions will appear in the final evaluation (WORTHEN; SANDERS; FITZPATRICK, 2004). This also helps to verify the local specificities and needs of each institution.
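As an illustration of this question-selection step, the sketch below checks each candidate question against the Table 2 criteria and keeps only those that pass. The sample questions, the mandatory treatment of criterion 7 and the cut-off of four positive answers are illustrative assumptions, not part of the original study.

# Sketch of the convergent-phase filtering: score each candidate question
# against the Table 2 criteria and keep only the questions that pass.
from dataclasses import dataclass

@dataclass
class CandidateQuestion:
    text: str
    interests_key_public: bool
    reduces_current_doubts: bool
    generates_relevant_info: bool
    permanent_interest: bool
    critical_to_study: bool
    impacts_study: bool
    answerable_with_resources: bool  # criterion 7: people/funds, time, methods

    def keep(self) -> bool:
        # Criterion 7 is treated as mandatory; the others are simply counted.
        if not self.answerable_with_resources:
            return False
        score = sum([self.interests_key_public, self.reduces_current_doubts,
                     self.generates_relevant_info, self.permanent_interest,
                     self.critical_to_study, self.impacts_study])
        return score >= 4  # illustrative cut-off agreed with the stakeholders

questions = [
    CandidateQuestion("Does the campus have a wireless network policy?",
                      True, True, True, True, False, True, True),
    CandidateQuestion("Number of dot-matrix printers in use",
                      False, False, False, False, False, False, True),
]
kept = [q.text for q in questions if q.keep()]
print(kept)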

With these concepts in mind, after applying this matrix to the questions generated in the divergent phase, the convergent phase began, with the external evaluator visiting each institution locally, observing the ICT infrastructure, and interviewing each ICT manager and some ICT technicians. Each question item was discussed with the ICT managers, and some questions were removed, some modified and a few created. The summary of results is shown in Table 3.

Table 3 – Summary of changes during evaluation convergent phase

Evaluation areas | Original number of questions | Questions created in this section | Questions modified in this section | Questions moved to this section | Questions moved to another section | Questions removed from this section | Number of questions after the convergent phase
1) General IT and computational policies on campus | 6 | 4 | 5 | 1 | | 1 | 10
2) IT infrastructure, networks and Internet | 29 | | 2 | | | 4 | 25
3) Academic IT policies | 27 | | 7 | | | 3 | 24
5) IT investments | 2 | | 1 | | | 1 | 1
6) IT Strategic Planning | 6 | 1 | 3 | | | | 7
7) About Free software | 4 | | | | | | 4
8) Networks and Academic Portals | 2 | | | | 2 | | 0
Total | 76 | 5 | 18 | 1 | 2 | 9 | 71


Analyzing Table 3, around 25% of the questions were modified and 7% of the questions are new. This shows the relevance of the local visits, interviews and discussions conducted during the convergent phase.

After the final questions were defined, it was decided to apply the survey online over the Internet. To reduce costs, some free survey sites were evaluated in terms of ease of use, confidentiality and result reports. The selected site was Esurveyspro.com. The survey was posted on their website and an email invitation was sent to the ICT managers of each institution. A password to access the online survey was sent in a separate email.

4. ICT usage indicators on IFES

After collecting the data, the evaluation methodology phase was finished and the process of building ICT usage indicators began. Other studies, such as Indicadores e Métricas para avaliação de e-Serviços (BRASIL, 2007) and The Connectivity Scorecard (NOKIASIEMENS, 2008), also use an evaluation methodology to define ICT indicators.

Initially, a survey of the institutional objectives of the IFES was made. Those objectives, obtained from the Institutional Development Plan of each institution, are: offer courses at undergraduate, technical and graduate levels; offer continuing education; perform research; and develop extension activities.

In the SINAES dimension of infrastructure, including information and communication resources, some directions for ICT indicators were found:

- Infrastructure adoption (computers and data network) to support teaching and research activities;

- Institutional policies related to the conservation, upgrade, security and usage of the equipment to fulfill the objectives;

- Infrastructure utilization to develop innovative pedagogic practices.

Although those indicators are not very clear, they at least give some hints concerning the ICT infrastructure and its usage policies, as well as the need for a strategic vision to deal with innovative pedagogic practices.

Taking these directions as a basis, the survey question items were redistributed into four sections that comprise the ICT usage indicators: General ICT usage policies, ICT infrastructure, Academic ICT usage policies and ICT Strategic Planning.

These indicators were correlated to the institutional objectives, as shown in Table 4.

Institutional objectives | ICT indicators that support the institutional objective
Offer courses at undergraduate, technical and graduate levels | General ICT usage policies; ICT infrastructure; Academic ICT usage policies; ICT Strategic Planning
Offer continuing education | General ICT usage policies; ICT infrastructure; Academic ICT usage policies
Perform research | General ICT usage policies; ICT infrastructure; Academic ICT usage policies; ICT Strategic Planning
Develop extension activities | ICT infrastructure; Academic ICT usage policies; ICT Strategic Planning

Table 4 – Mapping of institutional objectives to ICT indicators
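The Table 4 mapping can be represented as a simple data structure and queried in the reverse direction, for example to see which institutional objectives rely on a given indicator. The dictionary below mirrors Table 4; the helper function is an illustrative addition.

# Sketch: the objective-to-indicator mapping of Table 4 and a reverse lookup.
objective_to_indicators = {
    "Offer courses at undergraduate, technical and graduate levels":
        ["General ICT usage policies", "ICT infrastructure",
         "Academic ICT usage policies", "ICT Strategic Planning"],
    "Offer continuing education":
        ["General ICT usage policies", "ICT infrastructure",
         "Academic ICT usage policies"],
    "Perform research":
        ["General ICT usage policies", "ICT infrastructure",
         "Academic ICT usage policies", "ICT Strategic Planning"],
    "Develop extension activities":
        ["ICT infrastructure", "Academic ICT usage policies",
         "ICT Strategic Planning"],
}

def objectives_supported_by(indicator: str) -> list[str]:
    """Institutional objectives that rely on the given ICT indicator."""
    return [obj for obj, inds in objective_to_indicators.items() if indicator in inds]

print(objectives_supported_by("ICT Strategic Planning"))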

Each indicator has a series of criteria derived from the questions of the applied survey. Each criterion received points ranging from zero to one, depending on the survey answers. The sum of the points of its criteria gives the total score of each ICT indicator. This gives the ICT manager a way to check which areas need more attention and which areas are stronger.
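A minimal sketch of this scoring scheme, with hypothetical criteria and point values (the real criteria and weights are beyond the scope of the article): each criterion contributes between zero and one point, and the indicator score is the sum of its criteria.

# Sketch: summing per-criterion points (0 to 1) into indicator scores.
# Criteria names and point values below are illustrative only.
criteria_points = {
    "ICT infrastructure": {
        "campus backbone at 1 Gbps or more": 1.0,
        "wireless coverage in classrooms": 0.5,
        "documented backup routine": 0.0,
    },
    "Academic ICT usage policies": {
        "learning management system in use": 1.0,
        "computer labs open to students outside class": 0.5,
    },
}

indicator_scores = {
    indicator: sum(points.values()) for indicator, points in criteria_points.items()
}

# A simple ranking tells the ICT manager which areas need more attention.
for indicator, score in sorted(indicator_scores.items(), key=lambda kv: kv[1]):
    maximum = len(criteria_points[indicator])
    print(f"{indicator}: {score:.1f} / {maximum}")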

An evaluation report was delivered to each ICT manager showing the results of the three institutions, their strong points and vulnerabilities, and some suggestions on how to deal with them. In the report, the institutions were identified as IFES A, IFES B and IFES C to maintain confidentiality. Later, each ICT manager received the information about which institution was theirs in the report.

The details of the ICT indicator criteria and of the values assigned to each criterion are beyond the scope of this article.

5. Conclusions

A theory-driven evaluation method is a great tool for building ICT indicators. The evaluation should be a form of organizational learning, a way to know the reality and how to improve it. It should never be a form of comparison or punishment.

Each educational institution is subject to externalities, and these factors should be considered in an evaluation process, with observation of the object and interviews with the proper stakeholders. SINAES is a step in this direction, but further discussions are necessary so each institution can identify its ICT needs.

Evaluation and ICTs can work together as tools to reduce social inequality and poverty in Brazil.

References

BRASIL. Indicadores e métricas para avaliação de e-Serviços. Brasília: Ministério do Planejamento, Orçamento e Gestão/Departamento de Governo Eletrônico, 2007.

CARVALHO, M.B. A3 – Metodologia de avaliação e construção de indicadores. Rio de Janeiro: Ciência Moderna, 2009.

CEFET/RJ. Plano de Desenvolvimento Institucional do Centro Federal de Educação Tecnológica Celso Suckow da Fonseca 2005-2009. Rio de Janeiro: CEFET/RJ, 2005.

CEFET/MG. Plano de Desenvolvimento Institucional do Centro Federal de Educação Tecnológica de Minas Gerais 2005-2010. Minas Gerais: CEFET/MG, 2005.

DIAS SOBRINHO, J. Avaliação: políticas educacionais e reformas da educação superior. São Paulo: Cortez, 2002.

INEP. SINAES – Sistema Nacional de Avaliação da Educação Superior: da concepção à regulamentação. 2nd edition. Brasília: INEP, 2004.

INEP. SINAES – Roteiro de Auto-Avaliação Institucional: Orientações Gerais. Brasília: INEP, 2004.

IPEA. Brasil: o estado de uma Nação. Available at <http://en.ipea.gov.br>. Accessed on 30 Dec. 2008.

ITU-T. Digital Opportunity Index. Available at <http://www.itu.int/ITU-D/ict/doi/index.html>. Accessed on 30 Nov. 2008.

LITTO, F.M. Campus Computing Report.Br 2004: computação e tecnologia da informação nas instituições de ensino superior no Brasil. São Paulo: Altana, 2004.

MELO, R.C.; CASTANHEIRA, M.; NEVES, A.M.C. Proposta para melhoria do processo de avaliação de treinamento em uma empresa de ensino em Tecnologia da Informação. In: XXXV Congresso Brasileiro de Ensino de Engenharia. Curitiba: ABENGE – Unicenp, 2007.

NOKIASIEMENS. The Connectivity Scorecard. Available at <http://www.nokiasiemensnetworks.com/global/IndustryThemes/ConnectivityScorecard/ConnectivityScorecard.htm>. Accessed on 30 Nov. 2008.

OCDE. Handbook on Constructing Composite Indicators: Methodology and User Guide. Organisation for Economic Co-operation and Development, 2005.

PEIXOTO, J.A.A. et al. Como Interagir Razão e Reflexão na Avaliação dos Desempenhos dos Cursos de Engenharia. In: XXXII Congresso Brasileiro de Ensino de Engenharia. Brasília: ABENGE – UnB, 2004.

SINAES. Portal do SINAES. Available at <http://sinaes.inep.gov.br>. Accessed on 30 Dec. 2008.

SPRITZER, I.M.P.A.; XAVIER, L.S.; MELO, R.C. A infraestrutura de tecnologia da informação como facilitadora da modernização do ensino nas instituições de educação superior públicas do Brasil. In: XXXIV Congresso Brasileiro de Ensino de Engenharia. Passo Fundo: ABENGE – Universidade de Passo Fundo/RS, 2006.

STUFFLEBEAM, D.L.; SHINKFIELD, A.J. Evaluation Theory, Models and Applications. American Journal of Evaluation, v. 28, n. 4, pp. 573-576. California: Sage Publications, 2007.

UTF/PR. Plano de Desenvolvimento Institucional da Universidade Tecnológica Federal do Paraná 2004-2008. Paraná: UTF/PR, 2004.

VIEIRA PINTO, A. O Conceito de Tecnologia. Rio de Janeiro: Contraponto, 2005.

WORTHEN, B.R.; SANDERS, J.R.; FITZPATRICK, J.L. Avaliação de programas: concepções e práticas. São Paulo: Gente, 2004.