EBSCO database


UNIVERSIDAD DEL CARIBE

2014

Sandra Gómez Pérez, Innovación Empresarial, morning session


Information technology services management: a value-added applied model based on ITIL and ISO/IEC 20000

María-Carmen Bauset-Carbonell and Manuel Rodenes-Adam

María-Carmen Bauset-Carbonell holds a PhD in computer science from the Universidad Politécnica de Valencia (UPV). Since 2009 she has been ITIL/Architecture services manager at Indra. Specialist in corporate networks and integrated systems (UPV). Microsoft Certified Systems Engineer. Specialist in information security management systems (Aenor). Delivery of IT Services Professional ISO/IEC 20000 (EXIN). ITIL certifications (EXIN): SOA (service offering and agreements); PPO (planning, protection & optimisation); OSA (operational support and analysis); RCV (release, control and validation).
http://orcid.org/0000-0002-2305-9596

Indra, Servicios ITIL Sistemas Internos. Avda. Cataluña, 9, entr. 46020 Valencia, Spain

[email protected]

Manuel Rodenes-Adam, doctor of industrial engineering, is full professor of business organization at the Universidad Politécnica de Valencia (UPV), director of the ITIO consulting master's programme (integration of ICT in organizations) and of the ITIO R&D&I group. He teaches at universities in Spain and Colombia; he was a visiting professor at New York University and a visiting scholar at the universities of Minnesota and Missouri.
http://orcid.org/0000-0002-9059-6674

Universidad Politécnica de Valencia, Depto. de Organización de Empresas, Edif. 7D, 2ª pl. Camino de Vera, s/n. 46022 Valencia, Spain

[email protected]


Abstract
Description of a management indicators model for information technology (IT) services provided to an organization, based on tangible and intangible management indicators obtained from the 13 processes that make up the ISO/IEC 20000 and Information Technology Infrastructure Library (IT management framework references). The model has been empirically tested on more than 90 IT services provided by the Internal Systems Division of Indra to the organization. The results obtained show once again that to add value to an organization, its IT services must efficiently manage equipment availability, continuity and capacity and control changes, both to improve the response time of incident resolution and to ensure customer satisfaction.

Keywords
Added value, ICT, IT, Information technology infrastructure library (ITIL), ISO/IEC 20000, Control objectives for information and related technology (Cobit), Information technology service management (ITSM), IT service management system (SGSIT).

Article received on 24-06-2012. Final acceptance: 16-08-2012.


Bauset-Carbonell, María-Carmen; Rodenes-Adam, Manuel (2013). "Gestión de los servicios de tecnologías de la información: modelo de aporte de valor basado en ITIL e ISO/IEC 20000". El profesional de la información, enero-febrero, v. 22, n. 1, pp. 54-61.

http://dx.doi.org/10.3145/epi.2013.ene.07

1. IT service management
Information technology (IT) services are becoming ever more complex: regulatory requirements increase, time and cost deviations frequently arise over their life cycle, technology advances continuously, and so on. All of this makes managing them both more necessary, so that they remain efficient, and more complex. When that management is effective, changes adapt proactively to the business strategy.

The United Kingdom's Office of Government Commerce (OGC, 2009) defines service management as a set of specialized organizational capabilities that provide value to customers in the form of services. These capabilities are functions and processes for managing services over a life cycle, with specializations in strategy, design, transition, operation and continual improvement.
http://www.ogc.gov.uk

Service management transforms resources into valuable services; on their own, resources would have relatively little intrinsic value for customers. Services provide value to customers and help them achieve their objectives at lower cost and risk, since responsibility is assumed by the contracted company (OGC, 2009).

The trend towards outsourcing and sharing has increased the number of service providers. A particular case, and the one addressed here, is that of internal organizational units that provide services to other units of the same organization.

Piattini and Hervada (2007) point out that experience has shown that service-level quality cannot be achieved solely through heavy investment in technology or highly qualified staff; it is the result of good management and planning at the company level. It is necessary to implement an IT service management system (SGSIT), strengthen the work of managers, and use metrics to monitor and control progress.

2. Value added by an SGSIT
This article is based on the doctoral thesis of Bauset (2012), which analyses the value added to an organization by implementing an SGSIT that applies the ISO/IEC 20000 standard.

Metrics and indicators were needed for this; as Steinberg (2006) says: "If you don't measure, you can't manage; if you don't measure, you can't improve". What this article does is precisely to describe a set of indicators that have been tested empirically.

Theoretical framework of the research

The literature review has two parts:

- A more traditional analysis to identify the aforementioned indicators.

- A more innovative one to identify to what extent IT management influences value added (the dependent variable) and through which dimensions (the independent variables of the model).

2.1. Dependent variable

The most notable authors used as references for measuring value added are:

Applegate (1995). At the end of what she calls the four stages in the application of IT, she highlights the three ways in which IT can add value to the company:

- Improving process performance. This factor has been incorporated into the model to analyse value added.

- Improving individual productivity and the quality of decisions.

- Bringing competitive advantages to the core business.

Pérez (2005). His doctoral thesis highlights the implementation of an SGSIT as a means to improve the value added to an organization. This coincides with the question of our research.

McNaughton, Ray and Lewis (2010). They present a framework for evaluating the improvements an ITIL-oriented IT service management system can bring from 4 perspectives: management, technology, IT users and IT employees. It is from this last point of view that value added is analysed in this work.

Kaplan and Norton (1996). They consider the use of scorecards one of the best practices for measuring IT performance and analysing its value added, and recommend that scorecards include the following perspectives and indicators:

- User: indicators that allow the user to evaluate IT.

- Operational: indicators of the IT processes required to guarantee application development and delivery. This is the point of view analysed in the present research.

- Future orientation: indicators related to innovation, making use of human and technological resources that allow services to be delivered on time.

- Business orientation: indicators that measure the alignment of IT services with business needs.

San-José, Mata and Olalla (2012). They stress that for IT to be cost-efficient and add value, the focus must be on: service level management, demand management, capacity, availability and asset control.


Figure 1. IT service quality management system according to ISO 20000. Source: Van et al. (2008)

Miñana-Terol (2001), Strassman (1997) and Bullon (2009). They relate the value added by IT services to investment and spending. This variable was not considered because the organization under study does not have a cost-per-service model.

Luftman, Papp and Brier (1999) and Chen et al. (2010). They relate value added to the alignment between business strategy and IT service management strategy. We did not consider this point of view because, as mentioned in the previous paragraph, the people surveyed in the organization under study have a highly qualified technical profile in the role of IT service managers, considered IT employees by McNaughton, Ray and Lewis (2010).

2.2. Independent variables

This section describes the IT management frameworks and the models used as a basis for identifying the indicators associated with the independent variables.

UNE-ISO/IEC 20000 Information technology. Service management system (SMS) was selected because it is the Spanish national standard against which an organization can certify its IT service management (figure 1).

The independent variables of the model are directly related to the 5 blocks into which the standard groups its processes: service provision, control, release, resolution, and relationships with the business and suppliers. These processes in turn match the phases of the service life cycle defined by ITIL: strategy, design, transition, operation and continual improvement.

In practice, ISO 20000 and ITIL are usually combined, as was observed in the organization under study. ITIL is considered a de facto world standard for IT service management, applicable to any business model.

To detect whether, once the organization was certified against ISO 20000, deviations arose that could put the integration of the processes into its daily operations at risk, we relied on the DICE model [(D)uration of the project, performance (I)ntegrity of the team, organizational (C)ommitment to change, additional (E)ffort] of the Boston Consulting Group (Sirkin, Keenan and Jackson, 2005).

To analyse the maturity level of the implemented processes we used CMMI (Capability maturity model integration), the maturity model for software engineering and other disciplines, SEI (2010).

The works of Steinberg (2006) and Bauset (2010) were used to identify indicators for the management processes of the ISO 20000 standard.

3. Model and hypotheses

The independent variables considered are related to the 13 processes of the ISO/IEC 20000 standard, grouped into 5 blocks (figure 2):

A) Efficiency of service provision from the point of view of capacity, availability, security and continuity.

B) Efficiency of service maintenance, relying on key aspects such as incident and problem resolution.

C) Level of control over the services from the point of view of change management, addressing aspects such as analysing whether changes will affect business operations, and configuration management, including the inventory of the IT assets that make up the services.

D) Efficiency of relationships with suppliers and customers: business relationship and supplier management processes.

E) Efficiency of release management, relying on the process of the same name in the standard.

The 7 starting hypotheses H1, H2, ..., which analyse the relation (f = function of) between the above independent variables and the value added to the organization (IT value), are described below:

H1, IT value = f(A)
Based on the processes of the standard associated with the service provision phase, the ITSM model of Steinberg (2006), and the official books of the ITIL standard and ISO 20000.

H2, IT value = f(B)
Based on the Bauset (2010) model, in which a direct relation between both variables related to incident management could be verified exploratorily, and also on the Steinberg (2006) model.

H3, IT value = f(C)
Based on the process indicators of the ITSM model of Steinberg (2006).

H4, IT value = f(D)
Based on the Bauset (2010) model, in which a direct relation could be verified exploratorily between both variables referring to the adaptation to changes requested by the business, including technology adaptations.

H5, IT value = f(E)
Based on the indicators of the Steinberg (2006) model.

H6, B = f(C)
Based on the Bauset (2010) model, in which a direct relation could be verified empirically between the control variable "changes" and the operational variables "incident resolution time" and "number of incidents and requests".

H7, E = f(C)
Based on the release and change management processes of the ISO 20000 standard, which states that a release implicitly entails a change.

Finally, we wanted to test the possible relation of value added with the following factors:

- efficient management of service provision;
- efficient management of service maintenance;
- level of control over the services;
- efficient management of relationships with suppliers and customers;
- efficient management of service releases.

Figure 2. Initial proposed model of the value of implementing the ISO/IEC 20000 standard as a function of 5 variables

4. Methodology
The field work was carried out on the 95 IT services that Indra's internal systems division provides to the entire organization; the 95 service managers were surveyed. Indra is an international technology consulting firm with more than 40,000 employees (2011).

Data associated with determining the sample:
- Size: 88 IT services
- Confidence level: 99.9%
- Sampling error: 3.75%

Analysis of the degree of success of the SGSIT implementation

The Sirkin et al. (2005) model was applied, with a satisfactory result. Key aspects were identified, such as "holding group seminars for service managers" to resolve doubts and improve their efficiency in applying the processes.

Analysis of process maturity

CMMI was used as the reference framework. The period under study covered an annual cycle from June 2010 to June 2011. The analysis was decisive in discarding the release management process: although it was defined, it was not yet applied with a sufficient level of maturity.

Questionnaire design

The questionnaire was structured in 5 blocks, and a pilot was run to refine it, selecting 5 service managers from different internal systems departments.


Data collection

This included selecting the interviewers, administering the surveys and cross-checking the data received. It was carried out in June-July 2011.

The indicators were endorsed by the systems division, the community of experts formed by those responsible for the SGSIT processes, the internal systems security director and the SGSIT manager, finally selecting those listed in table 1. Codes beginning with VD are indicators of the dependent variable; those beginning with VI belong to the independent variables.

5. Research results

A descriptive and multivariate analysis was performed, applying linear and curvilinear regression techniques to identify the most influential variables, and a path analysis to determine the direct and indirect influences between the variables.

An illustrative diagram with the results obtained is included below (figure 3):

- Each arrow shows the value of the corresponding standardized regression coefficient, and each variable has an associated percentage indicating how much of it is explained by the related independent variables (adjusted R2).

- Black arrows represent the relations of the initial theoretical model described in the previous section. All of them could be verified except H5, H6 and H7; H5 and H7 involved the release management variable, which, as explained, was discarded because of its maturity level.

- Blue arrows are the new relations that emerged from the analysis.

- In addition to linear regression, curvilinear regression was analysed for some variables to see whether it improved the adjusted R2.

Table 1. Selected indicators

CODE: NAME AND CALCULATION

VD_USABILIDAD_POTENCIAL: Usability. Objective indicator measuring which services have the highest/lowest demand
VD_USABILIDAD_CONCURRENTE: Usability. Objective indicator measuring which services have the highest/lowest demand
VD_CALIDAD: Service quality level, one of the stated objectives of the standard
VD_MEJORAS: Number of improvements incorporated, new functionalities or service extensions
VI1_CAPACIDAD: Maximum number of concurrent users the service supports without degradation
VI1_OCUPACION: VD_USABILIDAD_CONCURRENTE*100/VI1_CAPACIDAD
VI1_DISPONIBILIDAD: % service availability: [(possible hours - unavailable hours) x 100] / possible hours
VI1_SEGURIDAD: Degree to which information security requirements have been considered
VI1_CONTINUIDAD: Estimated time to restore the service after a total outage
VI2_PETICIONES: Number of service requests in the period. Objective indicator related to demand
VI2_INCIDENTES_TOTALES: Total number of incidents in the period
VI2_INCIDENTES_NIVEL3: Number of level-3 incidents in the period
RATIO_RESOL_INC_N3: VI2_INCIDENTES_NIVEL3*100/VI2_INCIDENTES_TOTALES
VI2_CRITICOS: Number of critical incidents
VI2_TPOINCT: Average time spent resolving incidents
VI2_TPOINCE: Average time spent by the technician resolving incidents
VI2_PORCENTAJETPOE: VI2_TPOINCE*100/VI2_TPOINCT
VI2_REDUCCIONINC: % reduction in incidents
VI3_CISPERIODO: Number of CIs per service registered in the period
VI3_CIS: Total number of CIs registered in the CMDB associated with the service
VI3_CMDB: Accuracy of the CMDB information
VI3_CAMBIOSNOREG: VI3_CISPERIODO - VI3_CAMBIOS
VI3_CAMBIOS: Number of changes associated with the service
VI3_PRUEBAS: Change test plan
VI3_REPROGRAMADOS: Number of changes rescheduled due to a failure or a component that was not foreseen
VI4_PROVEEDORES1: Degree of supplier compliance with contractual agreements
VI4_PROVEEDORES2: Number of contractual objectives aligned with the needs of the service
VI4_SATISFACCION: Degree of customer satisfaction with the service
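Several of the indicators in table 1 are simple ratios that can be computed mechanically from raw service data. The following is a minimal sketch of three of them; the Service record and the sample values are hypothetical illustrations, not data from the study.

```python
# Minimal sketch: computing three of the derived indicators from table 1.
# Field names follow the indicator codes; the Service record itself is
# hypothetical, not part of the original article.
from dataclasses import dataclass

@dataclass
class Service:
    horas_posibles: float        # possible service hours in the period
    horas_no_disponibles: float  # hours the service was unavailable
    usabilidad_concurrente: int  # VD_USABILIDAD_CONCURRENTE
    capacidad: int               # VI1_CAPACIDAD
    incidentes_totales: int      # VI2_INCIDENTES_TOTALES
    incidentes_nivel3: int       # VI2_INCIDENTES_NIVEL3

def disponibilidad(s: Service) -> float:
    # VI1_DISPONIBILIDAD = (possible hours - unavailable hours) * 100 / possible hours
    return (s.horas_posibles - s.horas_no_disponibles) * 100 / s.horas_posibles

def ocupacion(s: Service) -> float:
    # VI1_OCUPACION = VD_USABILIDAD_CONCURRENTE * 100 / VI1_CAPACIDAD
    return s.usabilidad_concurrente * 100 / s.capacidad

def ratio_resol_inc_n3(s: Service) -> float:
    # RATIO_RESOL_INC_N3 = VI2_INCIDENTES_NIVEL3 * 100 / VI2_INCIDENTES_TOTALES
    return s.incidentes_nivel3 * 100 / s.incidentes_totales

svc = Service(720.0, 3.5, 240, 400, 57, 12)
print(f"availability: {disponibilidad(svc):.2f}%")      # 99.51%
print(f"occupation:   {ocupacion(svc):.1f}%")           # 60.0%
print(f"level-3 rate: {ratio_resol_inc_n3(svc):.1f}%")  # 21.1%
```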

Value added was analysed from two dimensions:

1) Services with the highest number of incorporated improvements. The verified factors that directly influence value added are:
- Incident resolution time used by the service managers.
- Adequate change management.

Figure 3. Linear regression model for the variable Value of IT services (* p < 0.05; ** p < 0.01; # p < 0.10)

These variables explain 64.3% of the variable VD_MEJORAS.

2) Services with the highest usage and quality factor. The verified factors that directly influence value added are:
- Availability, continuity and capacity.
- Adequate change management.
- User satisfaction.

These variables explain 82.6% of the variable VD_USO/CALIDAD.

The independent variables show a strong influence on the dependent variables, which makes the model highly useful and sound.

The analysis was completed by trying to identify possible indirect relations, evaluating the dependencies between the first-level variables and the rest. Thus, at a second level, it was found that the services with the most incidents are those that make the most changes, revealing that capacity, availability and continuity design aspects were not considered during the provision phase.

It was also observed that services whose configuration management database, or CMDB, is less complete make more changes.

Finally, at the second level, users turn out to be more satisfied with services where the number of incidents is reduced.

At a third level, reducing the number of incidents is influenced by the degree of compliance with security requirements and by whether suppliers cover the needs of the service.

Empirically verified hypotheses:

H1, IT value (services with highest usage and quality) = f(A: availability, continuity, capacity)

H2, IT value (improvements) = f(B: level-3 incident resolution times)

H3, IT value (services with highest usage and quality, with improvements) = f(C: changes)

H4, IT value (services with highest usage and quality) = f(D: satisfaction)

Hypothesis H6 was rejected, since the inverse of the originally proposed relation was obtained; this will be studied in future revisions of the model.

Finally, the following relation could be verified:

IT services value = f(A: availability, continuity, capacity; B: level-3 incident resolution time; C: changes; D: satisfaction).
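This verified relation corresponds to the kind of ordinary-least-squares fit behind figure 3. The following is a minimal sketch only: the CSV file, the choice of indicator columns per block, and the use of statsmodels are our assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch of the kind of linear-regression fit reported in the article,
# assuming a CSV of per-service indicators (file and column selection are
# hypothetical, not from the original study).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("servicios_ti.csv")

# One representative indicator per block:
# A (provision), B (maintenance), C (control), D (relationships)
predictors = ["VI1_DISPONIBILIDAD", "VI1_CONTINUIDAD", "VI1_CAPACIDAD",  # A
              "VI2_TPOINCT",                                             # B
              "VI3_CAMBIOS",                                             # C
              "VI4_SATISFACCION"]                                        # D

X = sm.add_constant(df[predictors])   # add the intercept term
y = df["VD_MEJORAS"]                  # dependent variable: improvements

model = sm.OLS(y, X, missing="drop").fit()
print(model.rsquared_adj)  # adjusted R2, the statistic reported in figure 3
print(model.params)        # coefficients (standardized only if inputs are z-scored)
```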

6. Comparison of results with other models

The efficiency of service maintenance (B), focused on optimizing incident resolution time, is, as we have been able to verify, one of the aspects to consider in order to add value to the organization.

This indicator was also shown to add value in the exploratory value-added model of Bauset (2010). It is used by Viñas (2011), of the consulting firm Enzyme, in the value-added model presented at the VI Congreso nacional de itSMF.

Another representative indicator that could be verified is the efficiency of service provision (A), with emphasis on the availability of critical services; it includes indicators related to the capacity, availability and continuity of services, all of them factors covered in the design phase of the ITIL service life cycle.

This indicator is also used in the Viñas (2011) model, restricted to the availability of critical services.

The third verified indicator was user satisfaction, one of the aspects analysed in the business relationship process, framed in the ITIL strategy phase. It is also included in the Kaplan and Norton (1996) model.

Kaplan and Norton (2001) consider intangible assets the greatest source of competitive advantage for an organization. User satisfaction would be considered part of those assets.

Finally, the fourth verified indicator influencing value added is control over the services (C), which refers to asset inventory and efficient change management. Control over the services is framed in the ITIL service transition phase, which includes the change management and configuration management processes.

The accuracy of the CMDB (configuration management database), a control indicator that checks that the database of service assets is complete, was also verified in the exploratory model of Bauset (2010) as a representative indicator of value added.

Conclusions, limitations and evolution

The model presented can serve as a reference guide for organizations that already have an SGSIT in place and need to measure the value added by IT service management.

After empirical testing, it has been shown that in an organization where an SGSIT has been implemented, value is added by directly influencing the following aspects:

Efficient management of service provision from the point of view of availability, continuity and capacity, processes related to the ITIL design phase.

The level of control over the services from the point of view of change management, framed in the ITIL transition phase.

Efficient management of service maintenance, improving incident resolution times, framed in the ITIL operation phase.

Efficient management of customer relationships and customer satisfaction, framed in the ITIL strategy phase.

It is worth noting that the aspects influencing value added represent every phase of the service life cycle as defined by ITIL.

As for the model's limitations, indicators from the release process of the ISO/IEC 20000 standard could not be considered, owing to its low maturity level after the implementation of the management system; this prevented hypotheses 5 and 7 from being tested.

In some indicators, such as the number of concurrent users, a high number of missing cases was detected, because not all services have tools for obtaining that information.

With the information obtained from the model, and as a continuation of this research, we propose building a scorecard that includes the influential indicators.

We also propose applying the model in other types of organizations, if possible of different sizes and sectors, to keep improving it, and including other points of view such as that of users or customers, as Kaplan and Norton (1996) indicate. This last proposed extension is of great interest and would require researching an additional complementary model.

Finally, other lines of research could analyse to what extent the implementation of an information security management system adds value to the organization, a topic we consider of interest given the results obtained (in which the value added by security requirements was confirmed) and the relationship between the two standards ISO 27001 and ISO 20000.

Bibliography

Aenor (2011). Tecnología de la información. Sistema de gestión del servicio (SGS). Parte 1: Requisitos. UNE-ISO/IEC 20000-1:2011. Spanish standard drawn up by technical committee AEN/CTN 71. Madrid: Aenor.

Applegate, Lynda (1995). Designing and managing the information age: organizational challenges and opportunities. Harvard Business School Press.

Bauset-Carbonell, María-Carmen (2010). El aporte de valor de las TIC en las organizaciones: desarrollo de un modelo de diagnosis basado en métricas que proporciona ITIL v3. 150 pp. Research work carried out at the UPV for the diploma of advanced studies.

Bauset-Carbonell, María-Carmen (2012). Modelo de aporte de valor de la implantación de un sistema de gestión de servicios de TI (SGSIT), basado en los requisitos de la norma ISO/IEC 20000. Doctoral thesis. Universitat Politècnica de València. 289 pp.
http://riunet.upv.es/handle/10251/16546

Bullon, Luis A. (2009). "Competitive advantage of operational and dynamic information technology capabilities". Journal of centrum cathedra, March, v. 2, n. 1, pp. 86-107.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1805940

Chen, Daniel Q.; Mocker, Martin; Preston, David S.; Teubner, Alexander (2010). "Information systems strategy: reconceptualization, measurement and implications". MIS quarterly, June, v. 34, n. 2, pp. 233-259.

Hamel, Gary; Prahalad, C. K. (1996). Competing for the future. New edition. Harvard Business School Press, March. 384 pp. ISBN: 0875847161

Hitt, Lorin; Brynjolfsson, Erik (1996). "Productivity, business profitability, and consumer surplus: three different measures of information technology value". MIS quarterly, June, v. 20, pp. 121-142.
http://www.jstor.org/discover/10.2307/249475?uid=2129&uid=2&uid=70&uid=4&sid=21101484208933

IT Governance Institute (2005). Measuring and demonstrating the value of IT. Printed in the United States of America. 25 pp. ISBN: 1 933284 12 9
http://www.isaca.org/Knowledge-Center/Research/ResearchDeliverables/Pages/IT-Governance-Domains-Practices-and-Competencies-Measuring-and-Demonstrating-the-Value-of-IT.aspx

Kaplan, Robert; Norton, David (1996). The balanced scorecard: translating strategy into action. ISBN: 978 0875846514

Luftman, Jerry; Papp, Raymond; Brier, Tom (1999). "Enablers and inhibitors of business-IT alignment". Communications of AIS, v. 1, art. 11, pp. 1-33.
http://teaching.fec.anu.edu.au/BUSN7040/Articles/luftman%20et%20al%201999%20bus-IT%20alignment.pdf

Lluís-Viñas, Alberto (2011). "¿Qué aporta TI al negocio? 5 métricas para medir el aporte de valor aportado". In: VI Congreso Nacional itSMF España (Madrid, 24-25 October 2011), session [SD.06-GO], pp. 1-25.

McNaughton, Blake E.; Ray, Pradeep; Lewis, Lundy (2010). "Designing an evaluation framework for IT service management". Information and management, v. 47, n. 4, pp. 219-225.
http://dx.doi.org/10.1016/j.im.2010.02.003

Miñana-Terol, José-Luis (2001). Desarrollo de un modelo que permita el diagnóstico en la aportación de valor de la infraestructura de Tecnologías de la Información (TI). Doctoral thesis. Universitat Politècnica de València. 290 pp.

Nolan, Richard (1994). Estimating the value of the IT assets. Harvard Business School, Boston.

OGC (2009). ITIL v3 - Estrategia del servicio. 1st publication. United Kingdom: TSO (The Stationery Office), 284 pp. ISBN: 978 0 11 331158 3

OGC (2009). ITIL v3 - Diseño del servicio. 1st publication. United Kingdom: TSO (The Stationery Office), 337 pp. ISBN: 978 0 11 331226 9

OGC (2009). ITIL v3 - Transición del servicio. 1st publication. United Kingdom: TSO (The Stationery Office), 270 pp. ISBN: 978 0 11 331227 6

OGC (2009). ITIL v3 - Operación del servicio. 1st publication. United Kingdom: TSO (The Stationery Office), 286 pp. ISBN: 978 0 11 331150 7

OGC (2009). ITIL v3 - Mejora continua del servicio. 1st publication. United Kingdom: TSO (The Stationery Office), 286 pp. ISBN: 978 0 11 331150 7

Pérez, Daniel (2005). Contribución de las tecnologías de la información a la generación de valor en las organizaciones: un modelo de análisis y valoración desde la gestión del conocimiento, la productividad y la excelencia en la gestión. Doctoral thesis. Universidad de Cantabria, Departamento de Administración de Empresas. ISBN: 8469006665
http://www.tdx.cat/handle/10803/10587

Piattini-Velthuis, Mario; Hervada-Vidal, Fernando (2007). Gobierno de las tecnologías y los sistemas de información. Madrid: Ediciones RA-MA. 456 pp. ISBN: 978 84 7897 767 3

San-José, Cristina; Mata, Montserrat; Olalla, Beatriz (2012). "Puntos clave en la eficiencia en costes en las TI". itSMF Service Talk, April, pp. 18-19.

SEI (Software Engineering Institute) (2010). CMMI for services version 1.3. CMMI-SVC, v 1.3. Improving processes for providing better services. Technical report, November.
http://repository.cmu.edu/cgi/viewcontent.cgi?article=1278&context=sei

Sirkin, Harold L.; Keenan, Perry; Jackson, Alan (2005). "The hard side of change management". Harvard business review, October, 13 pp.

Steinberg, Randy (2006). Measuring ITIL: measuring, reporting and modeling - the IT service management metrics that matter most to IT senior executives. Canada: Trafford, 154 pp. ISBN: 1 4120 9392 9

Strassman, Paul (1997). The squandered computer: evaluating the business alignment of information technologies. New Canaan, CT: Information Economics Press. 413 pp. ISBN: 0 9620413 1 9

Van-Bon, Jan; Polter, Selma; Verheijen, Tieneke; Pieper, Mike (2008). ISO/IEC 20000. Una introducción. Quint Wellington Redwood (translator). First edition. The Netherlands: Van Haren Publishing, 242 pp. ISBN: 978 90 8753 293 2



Organizational Social Structures for Software Engineering

DAMIAN A. TAMBURRI, PATRICIA LAGO, and HANS VAN VLIET, VU University Amsterdam

Software engineering evolved from a rigid process to a dynamic interplay of people (e.g., stakeholders or developers). Organizational and social literature call this interplay an Organizational Social Structure (OSS). Software practitioners still lack a systematic way to select, analyze, and support OSSs best fitting their problems (e.g., software development). We provide the state-of-the-art in OSSs, and discuss mechanisms to support OSS-related decisions in software engineering (e.g., choosing the OSS best fitting development scenarios). Our data supports two conclusions. First, software engineering focused on building software using project teams alone, yet these are one of thirteen OSS flavors from literature. Second, an emerging OSS should be further explored for software development: social networks. This article represents a first glimpse at OSS-aware software engineering, that is, to engineer software using OSSs best fit for the problem.

Categories and Subject Descriptors: D2.9 [Software Engineering]: Management—Software process models

General Terms: Management, Human Factors

Additional Key Words and Phrases: Organizational social structures, social context, users, cultural implications, social adaptivity, user perspective, information trust, knowledge management, governance, organizational decision-making, software organizations, social networks, social structures, communities, software practice

ACM Reference Format: Tamburri, D. A., Lago, P., and van Vliet, H. 2013. Organizational social structures for software engineering. ACM Comput. Surv. 46, 1, Article 3 (October 2013), 35 pages.
DOI: http://dx.doi.org/10.1145/2522968.2522971

1. INTRODUCTION

1.1. Vision and Goals

Social interaction has evolved as a consequence of globalization [Martinelli 2007]. The impact of this evolution is deep and spread to all aspects of society (infrastructure, services, etc.) [Langhorne 2001]. For example, dynamic and unpredictable global demands in businesses transformed single supply chains into value-creating supply networks of corporate partnerships [Jetter et al. 2009]. Moreover, cloud computing rendered software and data increasingly pervasive, inexpensive, and globally accessible [Armbrust et al. 2010; Kshetri 2010]. Previous research also suggests the keyword "social" is rapidly growing in interest for software engineering [Chard et al. 2010; Herbsleb and Mockus 2003]. Indeed, from a social perspective, software engineers, stakeholders, and end-users form an Organizational Social Structure (OSS). Literature shows that quality support to developers' OSS influences project success [Cataldo and Nambiar 2009a; Cataldo et al. 2009] and final product quality [Nagappan et al. 2008; Cataldo and Nambiar 2009b, 2012]. Better OSS support would ease, for example, early detection of socio-technical incongruences (e.g., software risks) [Cataldo et al. 2008; Lyytinen et al. 1998].

Authors’ addresses: D. A. Tamburri (corresponding author), P. Lago, and H. van Vliet, VU University Ams-terdam, De Boelelaan 1081a, The Netherlands; email: [email protected] to make digital or hard copies of part or all of this work for personal or classroom use is grantedwithout fee provided that copies are not made or distributed for profit or commercial advantage and thatcopies show this notice on the first page or initial screen of a display along with the full citation. Copyrights forcomponents of this work owned by others than ACM must be honored. Abstracting with credit is permitted.To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of thiswork in other works requires prior specific permission and/or a fee. Permissions may be requested fromPublications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212)869-0481, or [email protected]© 2013 ACM 0360-0300/2013/10-ART3 $15.00

DOI: http://dx.doi.org/10.1145/2522968.2522971

ACM Computing Surveys, Vol. 46, No. 1, Article 3, Publication date: October 2013.


Fig. 1. Key concepts within our study and their relations.

However, in the discipline of software engineering there is still little understanding of OSSs. Our goal in this work is to present the state-of-the-art in OSSs and provide instruments allowing practitioners to identify, select, analyse, or support the exact social structure they need, or will be part of, as software engineers. We found that each OSS type can be modelled by a set of distinctive attributes and their dependencies. These models allow reasoning on OSS types, to evaluate their fitness for software development purposes. These purposes include: (a) in software process monitoring, analyse the current OSS status (i.e., attribute values) for timely detection of socio-technical incongruences; (b) in postmortem analysis, identify the OSS attributes that led to failure, to define best practices.

We discuss the state-of-the-art in OSS and its implications, by referring to real-life development scenarios or previous and related research in software engineering.

1.2. Terminology

An OSS is the set of interactions, patterned relations, and social arrangements emerging between individuals part of the same endeavour [Wenger et al. 2002]. To clarify the critical concept of OSSs, let us consider Global Software Engineering (GSE) as an example. GSE is a software engineering practice entailing project teams spread across many timezones, for example, to achieve round-the-clock productivity [Herbsleb and Mockus 2003]. The people taking part in the GSE process necessarily interact with others (stakeholders, colleagues, superiors, etc.) during this process. The emerging web of relations, dependencies, collaborations, and social interactions they will be part of is a set of nontrivial patterns (therefore a "structure") which entails people (therefore "social"). These people act as an organised whole, focusing on a particular goal (therefore "organizational").

Figure 1 provides an overview of our contributions to define OSS and support OSS-related decisions. In what follows, in italic, we explain each contribution. Univocal identification of OSSs is possible through a single DifferentiatingAttribute for every type. Also, each type is further characterised by DefiningAttributes, whose value is unique to it. In addition, we provide means to determine which attributes are influenced by the selection of a certain OSS type (i.e., through a Graph that plots dependencies between the differentiating attribute and others). Moreover, we offer a set of patterns through which key OSS types can be combined (i.e., through an OSS TransitionSystem), and according to which certain OSS types can transit to other types, under certain circumstances (e.g., some attributes change value). Finally, we offer a set of attributes (and their dependencies) which are applicable (yet not required) to all OSSs. These are GenericAttributes that can further configure the chosen OSS to fit exactly the domain and problem at hand.
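A minimal sketch of these concepts, under our own naming, may help: one differentiating attribute identifies each OSS type, defining attributes hold type-specific values, and a dependency graph records which attributes the differentiating one influences. The example values below are illustrative only, not taken from the survey's models.

```python
# Sketch of the Figure 1 concepts: OSS types with differentiating and
# defining attributes plus a dependency graph. Values are placeholders.
from dataclasses import dataclass, field

@dataclass
class OSSType:
    name: str
    differentiating_attribute: str  # univocally identifies the type
    defining_attributes: dict = field(default_factory=dict)  # values unique to the type
    dependencies: list = field(default_factory=list)
    # dependency graph edges: (differentiating attribute, influenced attribute)

cop = OSSType(
    name="Community of Practice",
    differentiating_attribute="situatedness",
    defining_attributes={"goal": "share a practice",
                         "leadership": "egalitarian"},
    dependencies=[("situatedness", "knowledge sharing")],
)

def identify(candidates: list, observed_differentiator: str) -> list:
    """Select the OSS types whose differentiating attribute was observed."""
    return [t for t in candidates
            if t.differentiating_attribute == observed_differentiator]

print([t.name for t in identify([cop], "situatedness")])
# -> ['Community of Practice']
```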

1.3. Structure of the Article

The rest of the article is structured as follows: Section 2 compares our study to others which investigate OSS attributes and characteristics in practice, to understand their impact. Section 3 explains the methods we used to attain our results as well as the primary studies we analysed. Section 4 presents our results. Section 5 discusses results usage, implications, and threats to validity. Section 6 concludes the article. The Appendix contains links to online material.

2. STUDYING OSS IN SOFTWARE ENGINEERING CONTEXTS

There are two immediate applications of our contributions in practice: (a) supporting software engineering research in the study of OSSs in practice; (b) supporting software engineering practitioners in OSS-related decisions. To show the relevance of these activities, we investigated related work in software engineering. Looking at top conferences in software engineering (ICSE and FSE) and data mining, we found many studies that implicitly study OSSs.

First of all, studies on socio-technical congruence [Cataldo et al. 2008] could benefit from our contributions. Socio-technical congruence is the degree to which technical and social dependencies match, when coordination is needed [Cataldo et al. 2009]. The authors in Cataldo et al. [2009] present socio-technical congruence in formal terms and empirically investigate its impact on product quality. Similar works (e.g., Bird et al. [2009b] and Kwan et al. [2011]) strongly motivate the study of OSSs and their influence on socio-technical congruence. In Bird et al. [2009b] the authors use social network analysis to investigate coordination among groups of developers with socio-technical dependencies. In our work we do not focus on such specific aspects of OSSs (e.g., member social dependencies [Kwan et al. 2011] for socio-technical congruence). Our scope is limited to providing a clear picture of what types of OSSs can be encountered, for example, while studying socio-technical congruence. This could help in understanding incongruences, their impact (e.g., through OSS attribute dependencies) and finding appropriate preventive governance (e.g., if formality of structure inhibits collaborativeness, changing community type may help loosen formality during software development).

In addition, organizational decision-making could benefit from our contributions. For example, offshoring is an organizational decision. Understanding if the current organizational layout of a company is performant (or even compatible) with offshoring is essential to "go global" [Cusick and Prasad 2006; Cataldo and Nambiar 2009a]. Organizational decisions change the OSS layout, and can be used to plan and support offshoring [Cataldo and Nambiar 2012]. We analyzed evidence in Cusick and Prasad [2006] and found that the authors are truly suggesting that offshoring attempts should assume a "Formal Network" OSS type. In addition, local sites should be governed by means of "Formal Groups"; this is consistent with findings from Cataldo and Nambiar [2012] and Cusick and Prasad [2006]. In this article we characterize OSS types, allowing practitioners to use them as reference (e.g., while applying results from Cusick and Prasad [2006]). It is out of our scope to provide an exhaustive treatment of OSSs' application in software engineering.

Finally, many works similar to Cusick and Prasad [2006] are available in literature (e.g., Bird et al. [2009a], Turhan et al. [2009], and Meneely and Williams [2009]) which investigate the influence of organizational decisions on collaborations and product quality aspects (both in open- and closed-source ecosystems [Pinzger et al. 2008; Tosun et al. 2009; Datta et al. 2011; Witten et al. 2001]). Literature also shows that collaboration and social structure are critical in multisite development and linked to final product quality [Herbsleb and Mockus 2003; Nagappan et al. 2008; Cataldo and Nambiar 2009b; Bird et al. 2009a]. These studies motivate the need for OSSs in software engineering. Similar works could draw from our results a clear picture of possible organizational decisions, their mutual dependencies, and impact. Known attribute dependencies could support decision-making, by showing which attributes change with a certain decision. For example, management decisions suggested in Cusick and Prasad [2006] may positively influence observed aspects but negatively influence the OSS lifecycle.

3. APPROACH AND PRIMARY STUDIES

3.1. Research Approach

To attain our results, we conducted a Systematic Literature Review (SLR) of OSSs using grounded theory [van Niekerk and Roode 2009]. We set out to answer two research questions:

(1) What types of OSSs can be distinguished in literature?
(2) What attributes can be identified for each type?

The approach can be organized in three steps: (a) create the set of articles to review (i.e., primary studies); (b) conduct the review; (c) analyze the data. Step (a) was carried out through a systematic search protocol [Kitchenham et al. 2008]. The protocol is divided in three stages: (i) elaborate the search string; (ii) apply the string on chosen search engines; (iii) extract primary papers from search results by filtering out through exclusion criteria.

For stage (i) of the systematic search, the search string was determined by extracting relevant keywords directly from the research questions. To maximize the match between the research questions and the search string we consulted with an external expert in software engineering and organizations research. The following search terms were selected:

(1) "organization" ∨ "organizational" ∨ "company" ∨ "enterprise" ∨ "firm" ∨ "structure" ∨ "group";
(2) "social structure" ∨ "social network" ∨ "informal network" ∨ "informal group";
(3) "knowledge sharing" ∨ "knowledge management" ∨ "knowledge exchange" ∨ "knowledge transfer";
(4) "community of practice" ∨ "communities of practice" ∨ "knowledge community" ∨ "knowledge communities" ∨ "network of practice" ∨ "networks of practice".

The preceding terms were combined in the following search string.

[4 ∨ (1 ∧ 2 ∧ 3)] ∧ [1 ∧ 2 ∧ 3 ∧ 4]
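To make the structure of this string concrete, here is an illustrative sketch that assembles it from the four term groups. The generic OR/AND output syntax is an assumption on our part, since each search engine accepts its own dialect.

```python
# Illustrative sketch: assembling the boolean search string from the four
# term groups used in the SLR. Output uses generic OR/AND syntax.
groups = {
    1: ["organization", "organizational", "company", "enterprise",
        "firm", "structure", "group"],
    2: ["social structure", "social network", "informal network",
        "informal group"],
    3: ["knowledge sharing", "knowledge management", "knowledge exchange",
        "knowledge transfer"],
    4: ["community of practice", "communities of practice",
        "knowledge community", "knowledge communities",
        "network of practice", "networks of practice"],
}

def or_group(terms):
    # quote each term and join the group with OR
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

g = {k: or_group(v) for k, v in groups.items()}
# [4 OR (1 AND 2 AND 3)] AND [1 AND 2 AND 3 AND 4]
query = (f"({g[4]} OR ({g[1]} AND {g[2]} AND {g[3]})) AND "
         f"({g[1]} AND {g[2]} AND {g[3]} AND {g[4]})")
print(query)
```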

In stage (ii) of the systematic search, the string was applied to the following scholarly search engines: Software Engineering - ACM Digital Library, SCOPUS, IEEE Xplore Digital Library, Science Direct, SpringerLink and Wiley InterScience; Knowledge Management - EBSCO electronic library, JSTOR knowledge storage, ProQuest-ABI/Inform; Multidisciplinary - ISI Web of Science. Finally, during stage (iii) of the systematic search, the initial results were screened against inclusion and exclusion criteria. We selected 122 peer-reviewed documents as primary studies and later added 21 with subsequent searches. An overview is present online (see the Appendix). These documents span a wide array of fields: from cognitive ergonomics to social sciences to software engineering.

Table I provides an overview (with rationale) of the criteria we used for screening. Steps (b) and (c) of our SLR were carried out through a hybrid Glaserian-Straussian Grounded Theory (GT) approach [Corbin and Strauss 1990]. GT means to systematically apply a series of steps to allow a theory to emerge "spontaneously" from data (hence, "grounded"). This makes GT valid since the resulting theory is emergent from data rather than confirmed (or disproven). Each phase in GT is self-contained, incremental, and iterative [Corbin and Strauss 1990]. Our approach is structured as follows.

(1) Open Coding - (4 phases). (1) Pilot study: 28 primary studies were randomly selected to generate an initial set of codes. A second pass on the pilot papers was applied at the end of coding with the final list of codes, to minimize inconsistency;


Table I. Inclusion/Exclusion Criteria with Rationale

Inclusion criteria:

1. A study that is mainly about social structures in relation to any organization or practice. Rationale: we are interested in types and attributes of organizational social structures. This implies that studies that are about organizational social structures are relevant to our research questions. For instance, studies that discuss communities of practice within an organization.

2. One of the main objectives of the study is to present a type of organizational social structure or a specific attribute of an organizational social structure. Rationale: if one of the objectives of a study is to present a type or attribute, it is relevant to our research questions. For instance, a study that introduces an emerging type of social structure for project-oriented organizations or a study that discusses the boundary-crossing attribute of networks of practice.

3. A study that is in the form of a scientific paper (i.e., it must be peer reviewed and published by a scientific publisher). Rationale: a scientific paper guarantees a certain level of quality and contains a reasonable amount of content. For instance, a journal paper.

Exclusion criteria:

1. A study that is not about social structures in relation to an organization. Rationale: studies that do not investigate social structures in relation to an organization (i.e., a company or an enterprise) are beyond the scope of this research and should be excluded. For instance, a study that is about an online social networking application like Facebook without any relation to an organization.

2. A study which is marginally related to organizational social structures. Rationale: for example, studies that focus on technologies related to organizational social structures or studies that are mainly related to statistics or social network analysis rather than social structures themselves.

3. A study that does not discuss any specific type or attributes of organizational social structures. Rationale: if a study does not discuss types and attributes of organizational social structures and only discusses them in general, it has no value to our research questions and should be excluded. For instance, studies about the importance of organizational social structures.

(2) Develop initial theory: based on the pilot study, an initial theory was generated; (3) constant comparison: the pilot study generated an initial set of 229 codes. These were organized into a hierarchy of codes based on emerging relations between concepts. Thus structured, the start-up list of codes was used to code the rest of the primary papers. Each paper was analyzed line by line with the list of codes. A code was applied iff it reflected a concept in a paragraph. This process is known as microanalysis [Onions 2006]. (4) Constant memoing: along step 3, notes were kept to capture key messages, relations, and observations on the study¹.

(2) Selective Coding - (2 phases). (1) Axial coding: comparing the concepts coded led us to inductively generate relations among coded concepts (e.g., OSS types, attributes of OSS types, etc.); (2) aliasing: the definitions of all concepts coded were compared with each other to identify aliases.

(3) Theoretical Coding - (3 phases). (1) Data arrangement: we captured every portion of text that was coded with a certain code in a table. Five tables were extracted, each representing a core concept observed in the literature. (2) Data modeling: the data was represented in a view, consisting of two diagrams. The first diagram shows the OSSs as well as their relations. The second diagram shows all the attributes and relations found (clustered according to the core concepts identified). (3) Theoretical sampling: the diagrams and all the data at hand were analyzed and sorted, trying to identify recurrent patterns, underlying relations, and hidden meaning. Our observation was aided by standard analysis methods such as weighted frequency analysis (i.e., by analyzing the number of times each type was encountered, weighted against the number of papers in which these were found), card sorting (by rearranging the hierarchy of types to let underlying relations show themselves), and conceptual modeling.

¹The labels used in all the models are in the form "M<x>.<y>". The label schema locates the y-th memo on the x-th table, for traceability (for all the material available online, refer to the Appendix).

Table II. Overview of OSS Types Clustered by Metatype

- Community: Community of Practice (CoP); Knowledge Communities (KC); Strategic Communities (SC); Informal Communities (IC); Learning Communities (LC); Problem Solving Communities (PSC)
- Network: Network of Practice (NoP); Formal Networks (FN); Informal Networks (IN); Social Networks (SN)
- Group: Work Groups (WG); Formal Groups (FG)
- Team: Project Teams (PT)
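The weighted frequency analysis used during theoretical sampling can be sketched as follows. We read "weighted against the number of papers" as a per-paper average, which is one plausible interpretation; all counts below are made-up placeholders, not survey data.

```python
# Illustrative sketch of weighted frequency analysis: times each OSS type
# was encountered, divided by the number of primary studies mentioning it.
# mentions[oss_type][paper_id] = times the type was coded in that paper
mentions = {
    "CoP": {"p01": 12, "p02": 4, "p03": 9},
    "NoP": {"p01": 3, "p04": 7},
    "PT":  {"p02": 5},
}

def weighted_frequency(per_paper: dict) -> float:
    total = sum(per_paper.values())  # total encounters of the type
    n_papers = len(per_paper)        # papers in which it was found
    return total / n_papers

ranking = sorted(mentions, key=lambda t: weighted_frequency(mentions[t]),
                 reverse=True)
for oss in ranking:
    print(f"{oss}: {weighted_frequency(mentions[oss]):.2f} mentions/paper")
```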

4. RESULTS

This section presents the overview of our three main results. As a first result, we found that each OSS type can be uniquely identified with a single key differentiating attribute. Additionally, OSSs are characterized by two key pieces of information. The first is a dependency graph, made of attributes that depend on the OSS's key differentiating attribute. The second entails other attributes, whose value is unique to that specific OSS type. Finally, the explicit relations among OSS types are represented in a UML-style metamodel (see Figure 2).

The second result presents a set of attributes, applicable (yet not necessarily) to all OSS types. A UML-style class diagram shows the dependencies among these generic attributes. The third result presents an OSS transition system and the analysis of the patterns which compose it. We found that it can be used in two ways. First, the transition system allows practitioners to predict OSS type changes. Analyzing and comparing the current state of an OSS (i.e., its defining attributes) to the state of others in the system, one can predict the shift between them. Second, these patterns represent empirical evidence of effective OSS combinations.
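As a rough illustration of how such a transition system could be made operational, consider the sketch below. The types, attributes, and transitions listed are invented placeholders, not the survey's actual transition patterns.

```python
# Minimal sketch of an OSS transition system: types are states, and a
# transition fires when a defining attribute takes a new value.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str      # current OSS type
    target: str      # OSS type it may shift to
    attribute: str   # defining attribute whose change triggers the shift
    new_value: str   # value that makes the target's profile match

TRANSITIONS = [
    Transition("Informal Network", "Formal Network", "formality", "high"),
    Transition("Community of Practice", "Network of Practice",
               "situatedness", "dispersed"),
]

def next_types(current: str, attribute: str, new_value: str) -> list:
    """Predict which OSS types the current structure may transit to."""
    return [t.target for t in TRANSITIONS
            if (t.source, t.attribute, t.new_value)
            == (current, attribute, new_value)]

# A co-located community whose members disperse starts looking like a NoP.
print(next_types("Community of Practice", "situatedness", "dispersed"))
# -> ['Network of Practice']
```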

4.1. OSS Types and their Relations

The literature we analyzed discusses 26 OSS types. However, for the sake of space, we discuss the 13 most relevant OSSs2. Table II provides an overview of the 13 types, clustered according to their metatype (i.e., community, network, group, or team). Metatypes represent core categories in our results, found through Grounded Theory. Each OSS is described through text and a table. The text explains the OSS's general characteristics, quoting from relevant papers and comparing with other types. The table contains: (a) the OSS's key differentiating attribute (marked "(key)"); (b) the OSS's dependency graph (below the key differentiating attribute); (c) the other defining attributes for the OSS. The types are presented according to their metatype, and ordered by descending relevance in literature.

4.1.1. Communities. All communities are made for sharing. For example, in Communities of Practice (CoPs), people sit in the same location, sharing a practice or passion. In Strategic Communities (SCs), people share experience and knowledge with the aim of achieving strategic business advantage for the corporate sponsors. Communities are generally associated with situatedness; however, we have found explicit indication of this only for CoPs. Finally, all communities are expected to exhibit a certain goal, be it personal, organizational, or both. Here follow the detailed definitions.

2Relevance was computed through a weighted bibliometric count, following suggestions in [Librarya 2010].


Fig. 2. Metamodel of OSS types.


(1) Communities of Practice (CoP): The key differentiating attribute for CoPs is situated sharing, or learning/sharing of a common practice (i.e., the attribute "situatedness" is a differentiator to identify a CoP). For example, the SRII events3 are gatherings in which multiple CoPs (corporate and academic) meet physically to informally exchange best practices in services science. Defining attributes for CoPs are in Table III. Quoting Wenger and Snyder [2000]:

"[CoPs] are groups of people informally bound together by shared expertise and passion for a joint enterprise. Engineers engaged in deep-water drilling, for example, consultants who specialize in strategic marketing, or frontline managers in charge of check processing at a large commercial bank."

A CoP consists of groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting frequently and in the same geolocation. As such, CoPs serve as scaffolding for organizational learning in one specific practice. They were found in 118 papers, but many other papers use this fundamental type to define and characterize other types, as extensions or specializations of it (e.g., see the generalization link between CoPs and NoPs in Figure 2). Within traditional software engineering, CoPs are constantly being exploited (most of the time unknowingly) by single (or more) corporation(s) to aggregate their employees together and allow them to exchange ideas and gather best practices. Literature presents evidence that CoPs are indeed managed by a leadership circle with egalitarian or loosely stratified forms [Klein et al. 2005]. This therefore differentiates CoPs from LCs and KCs, since literature stresses for them the presence of strong leadership and its inclination towards the accumulation and dissemination of culture [Blankenship and Ruona 2009; Ruikar et al. 2009]. The presence of egalitarian forms suggests that a CoP looks much like a graph of peers, where nodes are equally shaped and characterized.

(2) Knowledge Communities (KC). The key differentiating attribute for KCs is the visibility of the contents and results they produce (i.e., the attribute "Visibility" is a differentiator to identify a KC). For example, global software engineering efforts can be represented as KCs, since within them every branch of the development (both in terms of knowledge exploited and results produced) should be visible to all others [Richardson et al. 2010]. Defining attributes for KCs are in Table IV. Quoting from Dickinson [2002]:

"Virtual knowledge communities are organized groups of experts and other interested parties, who exchange knowledge on their field of expertise or knowledge domain in cyberspace within and across corporate and geographical borders. Virtual knowledge communities focus on their knowledge domain and over time expand their expertise through collaboration. They interact around relevant issues and build a common knowledge base."

Essentially, KCs are groups of people with a shared passion to create, use, and share new knowledge for tangible business purposes (e.g., increased sales, increased product offer, client profiling, etc.). The main difference with other types is that KCs are expected (by the corporate sponsors) to produce actionable knowledge (knowledge which can be put to immediate action, e.g., best practices, standards, methodologies, approaches, problem-solving patterns, etc.) in a specific business area. What is interesting is that these are expected to produce actionable knowledge even if corporate sponsors do not formally state the expectation; rather, management practices are usually put in place to make sure this expectation comes true.

3www.theSrii.org.


Table III. CoP Defining Attributes

"Situatedness" (key): For CoPs situated learning or practice is expected, informal and collective, in that members learn through reciprocity within the (localized) CoP.
"Situated Learning dependency graph": (shown as a diagram in the original)
"Reciprocity": According to literature, reciprocity of knowledge exchange is common practice in CoPs, since members carry out knowledge activities with each other, almost exclusively.
"Shared Repository": Literature shows the enforced presence of a shared repository in every CoP, aimed at containing and easing the distribution of knowledge.
"Members Official Status": Literature indicates that there is no official status for CoP members other than participation. A professional spontaneously participates within the CoP and is accepted as a member.
"Shared Practice": Literature shows clearly that CoPs are indeed based around the sharing of a common practice.
"Membership Creation Process": Literature indicates two possible ways for membership creation to happen in CoPs, namely: Self Selection, i.e. autonomous self-appointment; Promotion, i.e. new members are elected by promotion from organizational sponsors or CoP management.
"Health": For CoPs, literature indicates this attribute to be proportional to good governance, i.e. to the correct maintenance of the CoP through governance practices.
"Knowledge Activities": Within a CoP, literature stresses, all possible knowledge activities can be carried out.
"Perceived Competitiveness": Literature identifies CoPs as non-competitive.
"Creation Process": CoPs' creation process is spontaneous and emergent.
"Visibility": CoPs are invisible according to literature, therefore only visible to those who already know of their existence.
"Communication Media": Literature suggests CoPs commonly use all identified media (e.g. electronic, paper, publicity, etc.).
"Organizational Goal(s)": Four organizational goals were identified in literature for CoPs, namely: Decrease Learning Curve, Prevent Employee Turnover, Manage Information Flow, Produce Actionable Knowledge.
"Cognitive Distance": Literature identifies cognitive distance for CoPs as high.
"Contract Value": Literature indicates contract value for CoPs is limited, in that organizational sponsors have little or no expectations from CoPs.
"Context Openness": Within a CoP, egalitarianism is enforced, thereby strengthening the equality of communication.
"Size": A CoP is indicated in literature as being big, i.e. with a total number of elements between 100 and 1000.
"Orientation": A CoP can be employed in both possible orientations, namely strategic and operational.

Moreover, KCs are not limited to using electronic communication and collaboration means (as NoPs are) but rather can colocate meetings or workshops to devise or explore new ideas (much like CoPs). Literature also stresses the presence of management and its key role in fostering community visibility [Lee and Williams 2007]. This suggests that a KC, in contrast to a CoP, looks much like a mixed hierarchy/heterarchy in which members and leadership have clear-cut distinctions and roles [Dickinson 2002].


Table IV. KC Defining Attributes

"Formal Status": Literature identifies knowledge communities as aggregates of people which have a formal and acknowledged status for the organizational sponsor.
"Visibility" (key): Knowledge communities need increased visibility (and management practices to maintain it) to make their practices and results known to the organizational context.
"Visibility dependency graph": (shown as a diagram in the original)
"Management Practices": Since the organizational sponsor has precise expectations from the KC, it applies a combination of management practices to maintain its operational effectiveness.

Another remarkable difference between KCs and CoPs is that CoPs need situatedness to exist, while KCs are often associated with virtual spaces, for example, as described in Rosso [2009] for the Apache Web Server knowledge network. Given these differences, the topology of a KC would also differ from that of a CoP, since its nodes would neither be constrained to a single geolocation nor be of a single type. This difference also suggests that while CoPs may be more frequent within the boundaries of a single corporation, KCs may be transversal in nature, encompassing multiple experts from multiple (partner) organizations [Dickinson 2002].

(3) Strategic Communities (SC). The key differentiating attribute for SCs is the contract value (i.e., the attribute "Contract Value" is a discriminator to identify an SC) that should be maintained (e.g., by generating ad hoc best practices) or generated (e.g., by analyzing strategic market sections ripe for growth). In software engineering, SCs are commonly associated with mission-critical systems that need 24/7 availability. For example, ESA, the European Space Agency, uses so-called "tiger teams" to investigate and solve specific software problems during mission engineering; tiger teams are selected from a loosely specified (cross-organizational) strategic community of experienced practitioners in software engineering and research: their focus is to support specific missions, safeguarding the people and infrastructure involved (i.e., the contract value is human, monetary, and technological). Defining attributes for SCs are in Table V. Quoting from Blankenship and Ruona [2009]:

"Strategic communities are formalized structures that consist usually of a limited number of experts within a single organization. These share a common, work-related interest into producing unexpected ideas to achieve strategic advantage [Hustad 2010]. These communities are intentionally created by the organization to achieve certain business goals."

SCs consist of meticulously selected people, experts in certain sectors of interest to a corporation or a set of organizational partners tied by formal nondisclosure agreements. These try to proactively solve problems within strategic business areas of the organizational sponsor. Compared to the others, SCs indeed seem very similar to KCs in that they are targeted towards obtaining business advantage. Similarly to WGs, SCs differ from KCs in their level of granularity, formality, and openness of membership, as well as contract value [Hansen 1999]. KCs act on a wider business sector (per project) than SCs, and their membership is usually open but subject to management practices (e.g., evaluation).


Table V. SC Defining Attributes

"Formal Status": Literature shows that SCs have a strongly formal, acknowledged status by the organizational sponsor. Indeed SCs are specifically created by the sponsor to pursue a specific business goal.
"Organizational Sponsor": In line with their formal status, strategic communities are fathered by the sponsors they are trying to support. The organizational sponsor provides fostering support after fathering the organization.
"Members Previous Experience": People appointed to an SC need to have a strong background in order to be selected for participation or membership.
"Personal Goal": Every member shares with the others a deep interest in tackling a common work-related issue. This issue is personal since it inhibits the work productivity of the single individuals (e.g. tackling the geographical distance of two sites which need to collaborate).
"Contract Value" (key): Since the SC is needed to tackle a work-related issue, the business gain from its success is strongest. As a consequence, the contract value invested by the organizational sponsor is highest.
"Contract Value dependency graph": -

SCs, on the other hand, concentrate on very tight business missions, commanded by an organizational sponsor. They are expected to produce strategic advantage through innovation or best practices [Erik Andriessen 2005]. Also, they are different from most other communities since the members are appointed based exclusively on experience, personal success, career titles, and anything that can justify their status as experts [Ruuska and Vartiainen 2003]. Given the right social network analysis tools, one could identify a strategic community by investigating the cliques of experts communicating in a certain business sector. Given that membership is enforced through management, the node types in SCs would be extremely similar if not identical. Finally, at some points in their lifecycle, SCs are similar to (and sometimes identified with) ICs [Erik Andriessen 2005]; reasons can vary (e.g., SCs could be dormant, waiting for organizational objectives to fulfill).
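The clique-based identification suggested above could be prototyped with off-the-shelf social network analysis tooling. The sketch below uses the networkx library on a hypothetical communication graph (all names invented):

import networkx as nx

# Hypothetical communication graph among practitioners of one business sector.
G = nx.Graph()
G.add_edges_from([
    ("ana", "bob"), ("bob", "eve"), ("ana", "eve"),   # a tightly knit triad
    ("eve", "joe"),                                   # a looser tie
])

# Candidate strategic communities: maximal cliques of communicating experts.
# Since SC membership is management-enforced, we expect few, dense cliques
# whose nodes look nearly identical.
for clique in nx.find_cliques(G):
    if len(clique) >= 3:
        print(sorted(clique))   # -> ['ana', 'bob', 'eve']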

(4) Informal Communities (IC). The key differentiating attribute for ICs is the high degree of member engagement (i.e., the attribute "Member Engagement" is a discriminator to identify an IC). An example in software engineering can be seen in the Agile Movement, whose success is decided explicitly by the contributions of its members (i.e., their engagement in actively disseminating and using "agile" practices). In addition, the open-source communities considered in Witten et al. [2001] and Meneely and Williams [2009] also match our definition of an IC, adding to it some feats of CoPs. Defining attributes for ICs are stated in Table VI. According to Dickinson [2002], the IC term "was coined in 1968 by the Internet pioneers J.C.R. Licklider and R.W. Taylor who, when wondering what the future online interactive communities would be like, predicted that they would be communities of common interest, not of common location, made up of geographically separated members. Their impact will be great both on the individual and on society."

ICs are usually sets of people, part of an organization, with a common interest, often closely dependent on their practice. They interact informally, usually across unbound distances, frequently around a common history or culture (e.g., shared ideas, experience, etc.). The main difference they have with all communities (with the exception of NoPs) is that their localization is necessarily dispersed, so that the community can reach a wider "audience". Loosely affiliated political movements (such as Greenpeace) are examples of ICs: their members disseminate their vision (based on a common idea, which is the goal of the IC).


Table VI. IC Defining Attributes

"Members Engagement" (key): The engagement of members in informal communities is kept high since there is no formal arrangement linking members to the community. High engagement acts as a glue for the community.
"Members Engagement dependency graph": (shown as a diagram in the original)
"Personal Goal": The informal community promotes mutual learning. The practice held within the community itself favors individuals who are part of the community sharing their own insight with others and learning, in turn.

Also, the success of the IC is exclusively tied to members' engagement, since their effort is what drives the community to expand, disseminate ideas, gather members, and so on. One characteristic which also differentiates ICs from other communities is the assumption of self-organization. This makes them very similar to networks rather than communities [Mentzas et al. 2006]. From a topological perspective, informal communities would exhibit self-similarity, as a consequence of self-organization [Hustad 2010]. However, nodes would be egalitarian (much like in CoPs) and highly dispersed (much like in NoPs). We were not able to determine whether ICs can be seen as a mid-stage in the transition from a CoP (purely situated) to an NoP (purely geodispersed).

(5) Learning Communities (LC). What differentiates this particular type from others is its explicit goal of incrementing, tracking, and maintaining the organizational culture of an organization (the attribute "Organizational Culture" is a differentiator when identifying an LC). A perfect example of an LC within software engineering is the BORLAND Academy4. BORLAND Academy and similar institutions are used by the organization's practitioners to learn about its current practices, methods, processes, and tactics, as well as to refine these over time as their product portfolio evolves. Defining attributes are contained in Table VII. LCs are, quoting Blankenship and Ruona [2009], "structures that provide space for learning and sharing knowledge. Much of the workplace learning community literature is situated within the education literature, where the structure is referred to as a Professional Learning Community (PLC). Stoll, Bolam, McMahon, Wallace, and Thomas (2006) agreed that although there is no universal definition for PLCs, there is international consensus that a PLC is a 'group of people sharing and critically interrogating their practice in an ongoing, reflective, collaborative, inclusive, learning-oriented, and growth-promoting way'".

4Recently discontinued after BORLAND’s acquisition by MicroFocus.


Table VII. LC Defining Attributes

"Leadership": In learning communities leadership must be strong, to maintain motivation and steer the learning practices.
"Organizational Culture" (key): Learning practices promoted by leadership increase the organizational culture which the learning community is maintaining. The organizational culture maintained by a learning community is as strong as the community's lifespan.
"Organizational Culture dependency graph": (shown as a diagram in the original)


LCs provide a space for pure learning and explicit sharing of actionable knowledge (i.e., skills). In a learning community, the leadership is expected to steer the community's practices, and membership is subject to approval and tied to the learning objectives given to the member [Bogenrieder and Nooteboom 2004]. Each developed or exchanged practice must become part of the organizational culture. Topics of learning are important for both business sponsors and participants (e.g., personal development skills, management skills, time optimization skills, etc.) [Ruuska and Vartiainen 2003]. The nature of the discussed topics makes LCs extremely different from other communities: most other communities have either personal or organizational goals, while LCs have both and pursue them equally. Maintaining and transmitting organizational culture is the chief aim of an LC, and this makes it even more specific than PSCs. From a topological perspective, LCs would appear as directed structures (e.g., directed graphs), since learning usually takes place one way only, that is, from learning manager to learner [Ruuska and Vartiainen 2003].

(6) Problem Solving Communities (PSC). The key differentiating attribute for PSCs is their goal, that of solving a specific problem in the scope of an organization or corporate sponsor (i.e., the attribute "Organizational Goal" is a differentiator for PSCs). For example, during the Apollo 11 mission, NASA adopted specific groups of action to solve issues which impeded or limited mission success, as well as to establish critical problem-solving practices [Riley and Dolling 2011]. These groups of action could be considered as PSCs. Defining attributes are contained in Table VIII.


Table VIII. PSC Defining Attributes

"Membership Official Status": A peculiar characteristic of problem solving communities is their potential for having both formally participating members and informal/occasional participants, as needed. The community, since it is targeted at a specific problem or issue, upholds the participation of (occasional) professionals who can help or consult on the problems at hand.
"Organizational Goal" (key): Problem solving communities are bent on resolving specific issues of relevance to their organizational sponsors. They are explicitly created to address a specific goal (i.e. solving a specific problem).
"Organizational Goal dependency graph": (shown as a diagram in the original)

PSCs consist generally of many geographically and organizationally dispersed employees of the same discipline. They focus on mitigating a specific, well-defined hazard (e.g., communities of anticontamination engineers). Quoting from Hustad [2010]:

"The problem-solving network is a distributed network of practice which meets the criteria of an expert group. The network provides resources in terms of help-desk functions where participants of the network support other colleagues by giving them special advice as regards particular business problems. In addition, participating in this kind of network ensures collaborative learning among the participants of the network."

In comparison to other types, a PSC can be seen as a specific instance of a Strategic Community focused on a particular problem. One would expect this community to be formal in nature. On the contrary, we found that they emerge as informal, since informality aids immediate problem-solving processes [Allen et al. 2007] and since the goal in PSCs is not achieving business advantage, but rather solving a critical problem (either immediate, demanding, often occurring, or disastrous). Much like KCs and ICs, PSCs use both face-to-face and digital technologies (e.g., forums) to enact problem-solving [Uzzi 1997]. From a topological perspective, a PSC would appear very similar to an IC where organizations acknowledge nodes' membership and provide problems to be solved (i.e., the organization is an active peer in the PSC). This characteristic makes the nature of PSCs very peculiar, since it is the only community type in which organizational sponsors are themselves members: they own the problems, and other members interact with them in egalitarian forms. Additional research should be invested in representing PSCs with a 2-mode network [Rosso 2009] and studying this condition further.
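As a starting point for such research, the suggested 2-mode view, in which problem-owning sponsors and practitioners form distinct node sets, can be sketched with networkx's bipartite helpers; the sponsors, members, and ties below are hypothetical:

import networkx as nx
from networkx.algorithms import bipartite

# 2-mode (bipartite) view of a PSC: sponsors own problems, members solve them.
B = nx.Graph()
B.add_nodes_from(["sponsor_a", "sponsor_b"], bipartite=0)   # problem owners
B.add_nodes_from(["dev1", "dev2", "dev3"], bipartite=1)     # practitioners
B.add_edges_from([
    ("sponsor_a", "dev1"), ("sponsor_a", "dev2"),
    ("sponsor_b", "dev2"), ("sponsor_b", "dev3"),
])

# Projecting onto the practitioner set recovers the 1-mode collaboration view:
# two members are tied if they work on problems of the same sponsor.
members = {n for n, d in B.nodes(data=True) if d["bipartite"] == 1}
print(bipartite.projected_graph(B, members).edges())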

4.1.2. Networks. Networks suggest the presence of digital or technological support tools; however, we found this explicitly only for NoPs. All network types are used by an organization to increase reachability, either through formal means (in FNs), through informal ones (in INs), or through customized forms of boundary spanning (e.g., in NoPs). Here follow the detailed definitions.

(7) Networks of Practice (NoP). The one differentiating attribute for NoPs is informality in geolocalized practice (i.e., the attribute "geodispersion" is a differentiator to identify an NoP). In global software engineering, many (virtual) teams collaborate together through the Internet across timezones, with specific networks (e.g., VPNs) and with strong management and governance policies in place.


However, the social structure in GSE is not fully specified nor supported, and is a matter of current research [Tamburri et al. 2012]. NoPs' defining attributes are in Table IX. Quoting from Hustad [2010]:

"NoP comprises a larger, geographically dispersed group of participants engaged in a shared practice or common topic of interest [...] CoPs and NoPs share the characteristics of being emergent and self-organizing, and the participants create communication linkages inside and between organizations that provide an 'invisible' net existing beside the formal organizational hierarchy".

An NoP is a networked system of communication and collaboration that connects CoPs (which are localized). In principle anyone can join it without selection of candidates (e.g., OpenSource forges are an instance of NoP [Parreiras et al. 2004]). NoPs have a high geodispersion, that is, they can span geographical and time distances alike. The high geodispersion increases their visibility and their reachability by members. An unspoken requirement for entry is the expected IT literacy of members. IT literacy must be high since the tools needed to take part in NoPs are IT based (e.g., microblogs, forums, hang-outs, etc.). NoPs are much like CoPs, since they share repositories of knowledge through their members. Different from CoPs, NoPs can be seen as IT-enabled global networks, since their chief aim is to allow communication (and collaboration) on the same practice across large geographical distances. In the scope of software engineering, NoPs have been used massively within global software engineering (mostly unknowingly or with limited support). For example, Bird et al. [2009b] discuss socio-technical networks in software engineering. Based on our definitions, such networks sit at the intersection between SNs and NoPs, since they address a specific practice but show a strong social connotation. Works similar to that in Bird et al. [2009b] could benefit from a more detailed overview of both SNs and NoPs, for example, to understand which of their characteristics support failure prediction. Moreover, socio-technical congruence, as expressed in Cataldo et al. [2008], could be studied by monitoring the organizational attributes that influence collaboration: this could lead to a better understanding of what hinders global developers' productivity.

(8) Informal Networks (IN). The key differentiating attribute for INs is the type of interaction that binds their members (i.e., the attribute "members interaction" is a differentiator to identify an IN). Informal networks have existed in software engineering since its very beginning: all people participating within any software process collaborate and interoperate within a web of social ties that could be defined as an IN. Also, in academia, the informal and loosely coupled set of research communities can be considered a world-wide informal network. Defining attributes for INs are contained in Table X. Simple networks emerge based on the relationships that individuals form with others. They are the building blocks from which other social structures may emerge. Anyone can join an IN, since there are no formal subscription processes: membership is often based on collective acceptance of a certain individual (e.g., establishment of friendship between flat-mates). Moreover, INs are essential foci for information exchange. Quoting Vithessonthi [2010]:

"The literature on social networks suggests that an informal social network can play a key role in enhancing organizational learning since social networks can be a source of information (Liebeskind et al., 1996). For example, Liebeskind et al. (1996) have found that social networks tend to extend the scope of learning and help the integration of knowledge possessed by two firms due to their collaboration".

As compared to the other types, INs can be seen as looser networks of ties between individuals that happen to come in contact in the same context. The driving force of the informal network is the strength of these ties between members.


Table IX. NoP Defining Attributes

"Boundary Spanning": Literature identifies boundary spanning as a very common practice in NoPs. NoPs have larger geographical dispersion and therefore a larger context with which to interact.
"Members Motivation": Literature identifies members' motivation as high in NoPs.
"Members Selection Process": Selection for entitlement to a Network of Practice is said to be driven by entry requirements.
"Support Tool": Given their nature, literature identifies many support tools that characterize a NoP. These are: Digital Communication Platforms, Persistent Computer Networks, as well as Interactive on-line communities.
"Communication Openness": Communication is identified as Open for NoPs. Therefore even professionals who are not strictly part of the network as members might make use of its resources (e.g. online community forums which can be consulted by anyone).
"Management Practices": Strong management practices are needed to maintain a NoP.
"Knowledge Type": Literature conveys that NoPs are particularly good at harvesting tacit knowledge in the minds of the members (e.g. posting a question on on-line forums and everybody answering with their own insight).
"Geodispersion" (key): Literature suggests that the whole NoP is geodispersed, i.e. every node can be distant from others in both time and space.
"Geodispersion dependency graph": (shown as a diagram in the original)

Finally, an IN differs from other types since it does not use governance practices; its success is solely based on the emergent cohesion its members have (therefore its success depends on the type of people that the IN comes into contact with and assimilates, rather than on its internal dynamics).

(9) Formal Networks (FN). In formal networks, memberships and interaction dynamics are explicitly "made" formal by corporate sponsors (i.e., the attribute "Membership Official Status" is a differentiator when identifying an FN). Conversely, although formally acknowledged by corporate sponsors, FGs are (commonly) informal in nature, and are grouped for governance or action. An example in software engineering is the OMG (Object Management Group): it is a formal network, since the interaction dynamics and status of the members (i.e., the organizations which are part of OMG) are formal; also, the meeting participants (i.e., the people that corporations send as representatives) are acknowledged formally by their corporate sponsors. An even more interesting example of an FN is the structure emerging between corporate partners with offshoring agreements. They constitute an FN (e.g., as discussed in Cusick and Prasad [2006]) since both partners need to agree on their membership status through formal agreements and interoperation procedures. Moreover, each site needs to be governed with clear role definitions and responsibilities. In addition, staff is usually managed as FGs, as suggested in Cusick and Prasad [2006]; this implies that FNs can be seen as virtual counterparts connecting local FGs, for example, in offshoring partnerships. We did not find any indication of inheritance between FGs and FNs. Defining attributes are contained in Table XI. According to Allen et al. [2007], "formal [social] networks are those that are prescribed and forcibly generated by management, usually directed according to corporate strategy and mission".


Table X. IN Defining Attributes

"Members Interaction" (key): Interaction in informal networks is intended as a social and informal interaction between individuals.
"Members Interaction dependency graph": -
"Critical Success Factor": A critical success factor in informal networks is a strong set of ties between the whole network and its members. This can keep motivation and participation high.
"Organizational Sponsor": The organizational sponsor, where present, should limit its interactions with the community to mere participation. Its interference should be reduced to an essential minimum only.
"Connectedness": The connectedness of participants depends on the degree of informality and organizational embeddedness the network has with respect to an organizational sponsor. Therefore the connectedness in an informal network is custom.
"Members Cohesion": Cohesion of members is based on mutual need. If the strength of ties is kept high then, as a consequence, mutual need is high. Therefore members' cohesion is also high.
"Trust In The Context": Trust in the context is assumed to be present in an informal network. Given the informality of its definition, trust is assumed and not maintained.
"Trust In Partner": The same goes for the trust each member places in its connections. Since it is an informal establishment, trust is assumed, not maintained or guaranteed in any way.
"Organizational Embeddedness": Embeddedness practices should be limited to the encouragement of activities.

Table XI. FN Defining Attributes

"Membership Official Status" (key): The status of formal networks' members is officially and formally recognized by the organizational sponsor.
"Membership Official Status dependency graph": (shown as a diagram in the original)
"Creation Process": Creating a membership in a formal network requires an invitation or some kind of formal appointment by the organizational sponsor. Also, the organizational sponsor usually nominates management for the formal network or uses management teams internal to the organization.


Within FNs, members are rigorously selected and prescribed. They are forcibly acknowledged by the management of the network itself. Direction is carried out according to corporate strategy, and the network's mission is to follow this strategy.

(10) Social Networks (SN). In social sciences, the concept of social networks is often used interchangeably with OSSs. Since every OSS type can be defined in terms of SNs, there is no distinctive difference with other types; rather, SNs can be seen as a supertype for all OSSs.


Table XII. SN Defining Attributes

"Goals": A social network has two main goals: the construction of shared knowledge and the harnessing of internet-based technologies to enable socially aware networking.
"Structure" (key): The structure of a social network can be divided into two: macrostructure details regard the highest level of abstraction of a social network, while microstructure details characterize the nature, type, and attributes of the social ties of single "nodes" in the social network.
"Structure dependency graph": (shown as a diagram in the original)

To identify the presence of an SN (or OSS), it is sufficient to be able to split the structure of an observable set of organizational patterns (e.g., organization interactions, teams' interactions, etc.) into macrostructure (i.e., structure of social ties and interactions in the large [Johnsen 1985]) and microstructure (i.e., structure of social ties and interactions at the single social-agent level [Fershtman and Gandal 2008]). The defining attributes for SNs are contained in Table XII. Of particular interest is the definition present in Hatala and Lutta [2009]:

"Social structure can be viewed as a set of actors with the additional property that the relational characteristics of these networks may be used to interpret the social behavior of the individuals involved".

SNs represent the emergent network of social ties spontaneously arising between individuals who share, either willingly or not, a practice or a common interest in a problem. SNs act as a gateway to communicating communities. With the advent of internetworking technologies, they are now explicitly supported by technologies and massively used in software engineering (e.g., LinkedIn, Academia.Edu, Google+, etc.).
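The macrostructure/microstructure split used to identify SNs maps naturally onto standard network metrics. A minimal sketch, using networkx's bundled Zachary karate club graph as a stand-in for an observed set of social ties:

import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()   # classic stand-in for an observed social network

# Macrostructure: properties of the network in the large.
print("density:", nx.density(G))
print("communities:", len(list(community.greedy_modularity_communities(G))))

# Microstructure: properties of single social agents and their ties.
centrality = nx.degree_centrality(G)
print("most connected node:", max(centrality, key=centrality.get))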

4.1.3. Groups. Groups are tightly knit sets of people or agencies that pursue an organizational goal. Cohesion in groups can be an activation mechanism (to increase engagement, as in WGs) or a way to govern people more efficiently (e.g., in FGs). Here follow the detailed definitions.

(11) Workgroups (WG). The key differentiating attribute for WGs is the cohesion of their members: they need to work in a tightly bound and enthusiastic manner to ensure the success of the WG (i.e., the attribute "Members Cohesion" is a differentiator to identify a WG). The IFIP WG 2.10 on software architecture5 is a WG, since its effort is planned and steady (i.e., cohesive), as well as focused on pursuing the benefits of certain organizational sponsors (e.g., IEEE, in the case of IFIP). Table XIII contains their defining attributes. WGs are defined as groups of individuals who work together on a regular basis to attain goals for the benefit of (potentially multiple) corporate sponsors. In Soekijad et al. [2004], "three characteristics [of WGs] are central: [they have] multiple goals, multiple compositions, and multiple results".

5http://www.softwarearchitectureportal.org/.


Table XIII. WG Defining Attributes

"Membership Formality": The formality of a WG is established and acknowledged by an explicit organizational sponsor, which establishes both the operative goals of the workgroup and the expected outcomes of each effort track it must be involved in.
"Members Cohesion" (key): Cohesion practices within the WG must be well defined to allow its productivity to remain high. Besides establishing goals for the WG, the organizational sponsor should also define requirements that the WG must adhere to, in order to operate exactly on the issues it is targeted at. Both requirements and goals keep members' cohesion high.
"Members Cohesion dependency graph": (shown as a diagram in the original)


WGs' goals can be multiple, spanning an array of organizational factors. Therefore, the expected or produced benefits can be wide as well. What is also fundamental is that a WG is always acknowledged and supported by organizational sponsor(s). Differentiating it from other types is its level of granularity: PTs act on specific projects (i.e., domain-specific, well-specified problems, with a clear set of goals and complementary sets of skills) while KCs focus on specific business areas and goals. A WG has a wider agenda and uses experts with similar skills and interests to tackle strategic issues, for example, developing practices that can span multiple business areas or observing the enterprise to drive organizational changes. Software engineering practitioners have been using WGs to generate standards and evaluate practices for standardization. Although these examples suggest that WGs cannot be used to develop software, in Pinzger et al. [2008] the authors observe that software developers should "[...] follow a well-defined goal and keep their work focused". Using the observation in Pinzger et al. [2008] as a rule would make development groups more similar to WGs than textbook PTs.

(12) Formal Groups (FG). The key differentiating attribute for FGs is their governance practices, which must be declared upon creation of the formal group (i.e., the attribute "Governance" is determinant to identify this type). Literature refers explicitly to formal groups as sets of project teams with a particular mission (or also, groups of people from which project teams are picked). Examples of formal groups in software engineering are software taskforces, such as the IEEE Open-Source Software Task Force6. Defining attributes are contained in Table XIV. FGs are exemplified in Hustad [2007] as

"[groups of] teams and/or working groups [...]. Numerous different definitions of diversity have been put forth; however, they generally distinguish between two main sets of characteristics [for FGs]: (1) diversity of observable or visible detectable attributes such as ethnic background, age, and gender; (2) diversity with respect to nonobservable, less visible or underlying attributes such as knowledge disciplines and business experiences".

6http://ewh.ieee.org/cmte/psace/CAMS taskforce/index.htm.


Table XIV. FG Defining Attributes

"Organizational Goal": Formal groups are appointed within an organization and always carry an organizational goal (e.g. coordination, performance, evaluation, etc.).
"Members Official Status": The official status of members is formal, since the organization appoints each member upon joining the organization or division.
"Governance" (key): A formal group always establishes diverse governance practices. The most common are Routinization (e.g. fixed hang-outs or meetings, stand-up status reports, etc.); Emotional Management (e.g. counseling of less performant engineers, etc.); Control (e.g. micro-management of development tasks, etc.); Scientific Management (e.g. management of the methods and approaches, development and promotion of best practices, etc.).
"Governance dependency graph": (shown as a diagram in the original)

FGs are comprised of people who are explicitly grouped by corporations to act on (or by means of) them (e.g., governing employees or easing their job or practice by grouping them in areas of interest). Each group has a single organizational goal, called its mission (governing boards are groups of executives whose mission is to devise and apply governance practices successfully). In comparison to Formal Networks, they seldom rely on networking technologies; on the contrary, they are local in nature. Finally, it is very common for organizations to have these groups and extract project teams out of them.

4.1.4. Teams. Teams are specifically assembled sets of people with a diversified and complementary set of skills. They always pursue an organizational goal with clear-cut procedures and activities. Finally, all project teams exhibit a longevity which is bound to a project or product. Here follow the detailed definitions.

(13) Project Teams (PT). The key differentiating attribute for PTs is their longevity, tied to a specific project (i.e., the attribute "longevity" is a differentiator to identify PTs). Their defining attributes are in Table XV. In Lindkvist [2005], the author provides a general definition of PTs with the following words:

"[PTs are] temporary organizations or project groups within firms [that] consist of people, most of whom have not met before, who have to engage in swift socialization and carry out a prespecified task within set limits as to time and costs. Moreover, they comprise a mix of individuals with highly specialized competences, making it difficult to establish shared understandings or a common knowledge base".


Table XV. PT Defining Attributes

"Longevity" (key): Longevity of a project team should be limited to the project for which it was selected and assembled. The team has the chance to be reunited again for similar projects, based on performance.
"Longevity dependency graph": -
"Knowledge Activities": The knowledge activities within a project team are limited to the use in practice of knowledge that is already available or comes from the context (or by means of an OSS of which the project team is made a member).
"Proficiency Diversity": The proficiency diversity when selecting team members should be maintained as complementary. All project members should complete each other in terms of skills.
"Organizational Slack": The organization or sponsor reserves (or should reserve) no operational slack for the project team. This means that the deadlines assigned to a certain team for a certain workpackage should be strict and ultimate.
"Organizational Goal": The organizational goal of the OSS emerging in teams is that of delivering the software (piece) it is being used to develop.
"Critical Success Factors": Four critical success factors are stated as being at stake in project teams: creative problem solving by its members; a social network for the team and the team alone; a weak set of ties between the members of the team; a technological gate-keeper which mediates the team's communication with the rest of the world for technical, operational or organizational issues.
"Size": The size of a project team should be extremely small (i.e. never more than 5-10 elements) and localized.
"Creation Process": The creation process of project teams should be formal and operated by the organization or organizational sponsor.
"Members Cohesion": Milestone-based and social-closeness-based cohesion practices should be adopted to properly nurture the team's operation.
"Members Previous Experience": Previous experience of members should be cross-functional.


PTs are made of people with complementary skills who work together to achieve a common purpose for which they are accountable. They are enforced by their organization and follow specific strategies or organizational guidelines (e.g., time-to-market, effectiveness, low cost, etc.). Their final goal is the delivery of a product or service which responds to the requirements provided. Compared to the other OSSs, PTs are the most formal type, comparable (in terms of formality) only to the formal groups type (of which they are an instance and from which they are picked). PTs are defined as strict and single-minded aggregates of people, (closely) collaborating on well-defined reification tasks (i.e., tasks which produce a tangible artifact that justifies their effort). Within software engineering, PTs are constantly used as the basic logical unit for software production.

4.2. OSS Types’ Relations

The 13 types are also reported in the UML-style metamodel in Figure 2. This shows all the relations found among them. The note on top of each type carries the number of publications in which it was found.

The two most general types in the metamodel are SNs and PTs. These two types appear at the top of the diagram. Our metamodel shows that INs and FNs are sibling types, deriving from SNs.


Fig. 3. OSS patterns found.

This indicates that research currently approaches OSSs from the two perspectives of SNs or PTs. The contact point between organizational research and social network analysis (or research in social networks) is in the CoP type, which inherits and aggregates INs. CoPs generalize four types: KCs, NoPs, WGs, and SCs. LCs are at the same abstraction level as INs (probably because learning is "informal" itself), while ICs seem an intermediate type between CoPs and NoPs, since they aggregate CoPs and are more specific than NoPs.

PTs are a "plug-in" type to CoPs since they are instances of FGs and aggregate into FGs (e.g., for governance), CoPs (e.g., communities of interest or interest groups), or NoPs (in case of virtualization of teams, as in global software engineering). Finally, PSCs are specific instances of NoPs (e.g., focused on a specific set of problems).
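One way to read these relations operationally is to encode generalization as subclassing and aggregation as containment. The sketch below is our own illustrative reading of a fragment of Figure 2, not an artifact of the original study:

# Generalization -> subclassing; aggregation -> containment fields.
class SocialNetwork: ...
class InformalNetwork(SocialNetwork): ...   # INs derive from SNs
class FormalNetwork(SocialNetwork): ...     # FNs are their sibling type

class CommunityOfPractice(InformalNetwork):
    def __init__(self):
        self.informal_networks = []   # CoPs aggregate INs

class NetworkOfPractice(CommunityOfPractice):
    def __init__(self):
        super().__init__()
        self.communities = []         # NoPs aggregate (localized) CoPs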

4.3. OSS Patterns and Transitions

The OSS metamodel (see Figure 2) exhibits the recurring patterns shown in Figure 3. According to pattern (a), NoPs aggregate multiple CoPs, which can be further specialized into NoPs. Similarly, according to pattern (b), CoPs aggregate multiple INs, which can be further specialized into CoPs. Finally, pattern (c) shows that NoPs can be made of CoPs, CoPs themselves can be made of PTs and, finally, NoPs can be a specific type of PTs (i.e., virtual teams).

4.4. Additional Generic Attributes

In addition to OSS defining attributes (contained in the definition of specific OSS types), we found 15 attributes which are applicable to all OSSs. The following enumeration distinguishes between attributes internal to OSSs (beginning with "OSS::") and attributes which belong to the environment or context of an OSS (beginning with "Context::"); a minimal data-structure sketch follows the enumeration.

(1) OSS::lifecycle: This consists of the (planned) steps that the OSS is expected to go through during its lifespan. The Software Process' intermediate steps constitute the lifecycle of the software engineering OSS.

(2) OSS::lifespan: This is the projected length of time within which the OSS is considered operational. The planned time-to-market of a software being engineered (in addition to its operational expectance-of-life) is the lifespan of a software engineering OSS.

(3) OSS::Goals: These are the aims that the OSS is going to work towards. These are either emergent or enforced through an organizational sponsor. The delivery of a software product meeting all the stakeholders' requirements, within their specific constraints, constitutes the goal of a software engineering OSS.

(4) OSS::Barriers: These are impediments, physical or otherwise, which hinder the operations of OSSs. Governance practices can tackle these barriers through barrier mitigation mechanisms. Time and distances in software engineering are instances of OSS barriers.



(5) OSS::Governance Practices: These are activities which create, decide upon, steer, or enforce organizational issues in OSSs. Deciding the brand and structure of workbenches for every developer, and arranging the available developers into development units, are governance practices in software engineering OSSs.

(6) OSS::Critical Success Factors: These are factors which are necessary to successfully achieve the goals established for the OSS within the boundaries of its projected lifespan and following the enforced governance practices. 24/7 availability in fault-tolerant systems is an instance of a critical success factor in the software engineering of critical-systems OSSs.

(7) OSS::Management Practices: These are practices which are specifically aimed at the management of resources, physical or otherwise, which are part of the OSS (e.g., history, version-tracking, inventory, etc.). Deciding which team should carry out which task is a management decision. Using round-the-clock productivity as a guideline for this decision is a management practice in global software engineering OSSs.

(8) OSS::Knowledge Repositories: These are the types of repositories found in literature. Different types may be employed together to achieve a compounding effect, increasing capabilities. A wiki containing documents and codebases for a software engineering effort is a Knowledge Repository in software engineering OSSs.

(9) Context::Barrier Mitigation Mechanisms: These mechanisms are used as part of governance practices by the management of OSSs in order to mitigate barriers to their proper function. Risk engineering practices in software engineering OSSs are barrier mitigation mechanisms.

(10) Context::Trust: These are the types of trust discussed in literature which are specifically relevant to OSSs. Privacy maintenance and measurement between outsourced and outsourcer partners are indicative of the level of trust between partners in software engineering OSSs.

(11) Context::Openness: This is the degree to which the context of an OSS and the OSS itself are open to information transfer and interchange. Information interchange between external communities and the software engineering project team in a project is its degree of openness (e.g., fully open in Open-Source communities).

(12) Context::Changes: These are events which alter the context of an OSS and to which the OSS reacts, either explicitly or implicitly. Test engineers' turn-over is a context change in a software engineering OSS.

(13) Context::Boundary Crossing Practices: These are practices that OSSs adopt to exchange information with each other through boundary objects; for example, their communication or interaction protocols count as boundary crossing practices.

(14) Context::Boundary Objects: These are the objects that can be used to actuate knowledge transfer between and across OSSs. Technological gateways, such as people who explicitly interchange design information between two development sites, are instances of boundary objects in software engineering OSSs.

(15) Context::Organizational Culture: These are the practices that an OSS uses in order to pursue internal integration (i.e., governance of members, resources, etc.) as well as external adaptations (actions on the context, knowledge transfer, etc.). Allowing 1 hour of sleep after lunch to increase software designers' productivity in the afternoon is part of the organizational culture of Google's (software engineering) OSS.

Given their number, all the possible attribute values we found are available online (for a link, see the Appendix).
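As referenced above, here is a minimal data-structure sketch bundling the 15 generic attributes into one record; field names and types are our own reading of the enumeration, not the authors' schema:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OSSProfile:
    # Attributes internal to the OSS (items 1-8 above).
    lifecycle: List[str] = field(default_factory=list)
    lifespan_months: Optional[int] = None
    goals: List[str] = field(default_factory=list)
    barriers: List[str] = field(default_factory=list)
    governance_practices: List[str] = field(default_factory=list)
    critical_success_factors: List[str] = field(default_factory=list)
    management_practices: List[str] = field(default_factory=list)
    knowledge_repositories: List[str] = field(default_factory=list)
    # Attributes of the OSS's environment or context (items 9-15 above).
    barrier_mitigation_mechanisms: List[str] = field(default_factory=list)
    trust: Optional[str] = None
    openness: Optional[str] = None
    context_changes: List[str] = field(default_factory=list)
    boundary_crossing_practices: List[str] = field(default_factory=list)
    boundary_objects: List[str] = field(default_factory=list)
    organizational_culture: List[str] = field(default_factory=list)

# Example: a global development team profiled with two of the attributes.
team = OSSProfile(goals=["deliver release 2.0"], barriers=["timezone distance"])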

In addition, Figure 4 captures attribute dependencies in a UML class diagram.


Fig. 4. OSS additional generic attributes’ dependencies.

The diagram can be used to choose additional attributes as needed, based on the impact each attribute has on the others. On the diagram, the labels (following the same scheme defined in Section 4.1) allow traceability to the raw data (available online). Specifically, 4 out of 15 attributes are not dependent on any of the 15, and they appear at the bottom of Figure 4 in grey. In all, 14 relations are present; a code sketch of these dependencies follows the list below.

(1) "BoundaryObject" is dependent on "KnowledgeRepository" since the choice of a boundary object influences the presence (or operation) of a knowledge repository. For example, a boundary object can often be used as a knowledge repository (e.g., a forum or wiki, such as in open-source forges).

(2) "KnowledgeRepository" is dependent on "GovernancePractices" since the presence of a repository influences governance, such as by limiting which practices can be applied and how.

(3) "GovernancePractice" is dependent on "CriticalSuccessFactors" of OSSs since governance is the prime activity through which an OSS's success can be established and steered. For example, a corporation can adopt many-eyes phenomena to govern the success of its best-practices development workgroup.

(4) "GovernancePractice" is also dependent on "Barrier" since many governance mechanisms are specifically designed to tackle specific difficulties (e.g., a redundancy matrix for employee skills in mission-critical systems).

(5) "GovernancePractice" is also dependent on "OrganizationalCulture" since many governance practices are usually part of the established organizational procedures in a (large) corporation.

(6) "Barrier" is dependent on "OrganizationalCulture" since many barriers are intrinsic to the domain or culture of certain organizations. For example, highly formal communities (e.g., formal methods) tend to exhibit "xenophobic" attitudes towards radically new ideas.

(7) "Barrier" is also dependent on "BoundaryCrossing" and "ContextOpenness". The (im-)possibility of exchanging information with external environments, or in fully closed contexts, can become a barrier. For example, consider security-critical systems such as Signal-Intelligence processing networks (e.g., the Echelon sensor array): these may be left unaware of critical failures in certain procedures or devices, given their "eyes-only" policies.


systems such as Signal-Intelligence processing networks (e.g., the Echelon sen-sor array): these may be left unaware of critical failures in certain procedures ordevices, given their “eyes-only” policies.

(8) "OrganizationalCulture" is dependent on "Trust" and vice versa. For example, the effectiveness of an organization's procedures and culture often depends on the degree of "loyalty" (which is a type of trust) of its employees. Conversely, the organizational stability of a company depends heavily on how trustworthy its premises and promises are.

(9) "Trust" is dependent on "ManagementPractices" since many of these practices are designed specifically to increase and maintain the level of trust in employees (e.g., informal leadership).

(10) "ManagementPractices" are dependent on "Lifecycle" since these practices steer and maintain the lifecycle of an OSS and, conversely, the healthy lifecycle of an OSS depends heavily on the management practices adopted. For example, a business-critical system subject to strict time-to-market constraints needs specific management practices such as extreme coordination, live integration, testing coordination, etc.

Remarkably, four attributes are not related to any other, namely: "Barrier Mitigation Mechanisms", "OSS Context Changes", "Goals", and "Lifespan". Investigating further in the literature, we found that they influence (either directly or indirectly) all remaining additional attributes and therefore are not explicitly related to any single one.
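The dependency structure above lends itself to programmatic use. The following is a minimal sketch of ours, not the authors' tooling: it encodes only the relations spelled out in items (1) through (10) with shortened attribute names (Figure 4 shows 14 relations in all, slightly more than the numbered list narrates).

```python
# A sketch of the narrated attribute dependencies as a directed graph.
# An edge A -> B reads "A is dependent on B"; names are abbreviated.
DEPENDS_ON = {
    "BoundaryObject": {"KnowledgeRepository"},                  # (1)
    "KnowledgeRepository": {"GovernancePractice"},              # (2)
    "GovernancePractice": {"CriticalSuccessFactors", "Barrier",
                           "OrganizationalCulture"},            # (3)-(5)
    "Barrier": {"OrganizationalCulture", "BoundaryCrossing",
                "ContextOpenness"},                             # (6)-(7)
    "OrganizationalCulture": {"Trust"},                         # (8)
    "Trust": {"OrganizationalCulture", "ManagementPractices"},  # (8)-(9)
    "ManagementPractices": {"Lifecycle"},                       # (10)
    "Lifecycle": {"ManagementPractices"},                       # (10), converse
}

# The four attributes the survey reports as related to no other one.
ISOLATED = {"BarrierMitigationMechanisms", "OSSContextChanges",
            "Goals", "Lifespan"}

connected = set(DEPENDS_ON) | {t for ts in DEPENDS_ON.values() for t in ts}
print(len(connected), "connected attributes,", len(ISOLATED), "isolated")
```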

5. USAGE, IMPLICATIONS AND THREATS TO VALIDITY

This section discusses our results, their exploitation, and their implications. Finally, we discuss threats to validity and the mitigation mechanisms we adopted.

5.1. Discussion of Results

Several authors suggest that OSSs have three key functions in software engineering: representation of knowledge, management of roles among partners, and communication patterns [Hannoun et al. 2000; Horling and Lesser 2004; Isern et al. 2011; Seidita et al. 2010; Fox 1988]. We used these research segments as parameters to analyze the metatypes found. In addition, we discuss the practical exploitation of our results in these research segments. Finally, we discuss the exploitation of our results within the software engineering lifecycle. Table XVI summarizes our observations.

5.1.1. Exploitation in Knowledge Representation. In the domain of knowledge representation and reasoning, the results have great potential for application. First, the 13 OSS types we provide could be used as a map to build or monitor collective-intelligence systems (e.g., ontology- and community-based crowd-sourcing applications [Lin et al. 2009]). Because collective intelligence is an emergent property of OSSs, practitioners could study their OSS requirements and compare them to our definitions to determine the best-fit OSS type. Moreover, many community-based approaches to ontology engineering and evolution (e.g., Hepp et al. [2006] and Debruyne et al. [2010]) could benefit from the OSS profiles we offer, since these could be used as reference models (e.g., the relations we have found among OSS types could be used as patterns to integrate multiple OSS types). In addition, the OSS definitions we provide could be used as a framework to evaluate expressivity [Tobies 2001] in community-support systems (e.g., social networking technologies designed for specific community types [Sintek et al. 2007]).

As a practical example of the previous discussion, consider the Forge Ontology Proposal (FOP)7. Forges are collaborative communities of open-source software development. To develop FOP, the community around it could observe real-life forges (e.g., SourceForge) and identify the communities present, using our OSS models as a reference. By putting together the type requirements and attributes of the communities found, the FOP initiative could develop a collaborative ontology based on empirical evidence. In addition, the FOP authors could validate its applicability in practice by comparing it to practical examples of the communities it was inspired from (e.g., a CoP or NoP in software development).

7https://forge.projet-coclico.org/plugins/mediawiki/wiki/wp2/index.php/Forge_Ontology_Proposal.


Table XVI. Metatypes Comparison according to Key Parameters from Literature

Communities

Knowledge Representation: Communities are ideal mechanisms to achieve shared understanding. This makes them also ideal to study community-based knowledge representations. Works in this direction can already be found, e.g., in [Hepp et al. 2006] and [Debruyne et al. 2010], where the authors adopt a community-based approach to the evolution of ontologies.

Roles/Partnership Management: Communities are (explicitly or implicitly) dependent on situatedness [Soekijad et al. 2004]. Consequently they have limited effectiveness in global partnerships (e.g., as a result of outsourcing), which force distance into the equation [Herbsleb and Mockus 2003]. This is consistent with results from [Cusick and Prasad 2006].

Communication Patterns: Communities usually exhibit informal communication, sharing, and focus on members. These characteristics make them ideal to study communication patterns which are efficient for particular goals. For example, PSCs could be studied to identify communication patterns which increase their effectiveness in solving problems. Moreover, given their focus on members, communities could be observed to design people-centric communication patterns (e.g., for more effective ambient assisted living).

Networks

Knowledge Representation: Networks can be differentiated by level of formality, goals, and member-centricity. By studying these parameters and their relations, more accurate and expressive knowledge representations can be devised. For example, semantic tools are now used to support individuals and formal networks alike (e.g., [Oren et al. 2006; Monticolo and Gomes 2011]). Studying these networks, semantic wikis and ontologies could be adapted to community needs and attributes.

Roles/Partnership Management: Networks are mechanisms that connect people across many distances (e.g., space, time, culture, biases, etc.). They are ideal mechanisms to support global partnerships, since their attributes can be customized to fit exactly with the partnerships' characteristics. For example, a network with high formality could connect a formal alliance between global offshoring partners (e.g., as suggested in [Cusick and Prasad 2006]). Conversely, informal networks could connect informal cooperations of professionals, e.g., in global Scrum-of-Scrums scenarios [Karolak 1999].

Communication Patterns: Our network profiles are ideal to study communication patterns. For example, aided by social network analysis (SNA), scholars could identify "success" or "failure" communication patterns by observing networks' attributes and comparing them to the expected goals. This could lead to the development of network tuning strategies to mitigate identified barriers. For example, an integrated Scrum team is indeed a network of practice. Through SNA, scholars could represent the network and verify the presence of attributes critical for NoPs (e.g., communication openness, based on our definition). If these attributes are not evident, there is a problem.

Groups

Knowledge Representation: Groups are tightly knit and interrelated sets of people [Hustad 2007; Cummings 2004]. Hence, their attributes restrict their use to the study of knowledge representations for specific domain areas. For example, by studying the IFIP Workgroup 2.10 on software architecture (e.g., shared understanding, cohesion practices, formality, etc.), practitioners could design adaptable knowledge representation mechanisms for software architectures.

Roles/Partnership Management: Again, groups are commonly associated with tightness of relations. This makes them ideal for governance of collocated people in global partnerships (e.g., groups in a single site, as suggested in [Cusick and Prasad 2006]).

Communication Patterns: The study of groups, their attributes, and their relations could be useful to identify successful communication and collaboration patterns. For example, managers could profile their managed groups and use the attributes as meters to assess the effectiveness of governance strategies.

Teams

Knowledge Representation: Teams are instances of groups which are time-bound and focused on a specific objective (e.g., the delivery of a product). We found that these attributes make them unreliable sources of information for knowledge representation, since each instance is different and cannot be generalized.

Roles/Partnership Management: Our literature comes from many fields, including software engineering. In software engineering, project teams have a very well defined connotation but are configured ad hoc to match the development problem. A better definition of teams would not be beneficial for roles and partnership management, since teams still need to be configured ad hoc to match the development problem (e.g., to include offshoring).

Communication Patterns: A better definition of teams' attributes is beneficial to develop communication and collaboration patterns that increase the project success rate. For example, studying the attribute values that arise when project teams and software processes are combined successfully can lead to the development of successful communication and collaboration patterns.

Fig. 5. OSS transitions.


5.1.2. Exploitation in Partnerships or Communication Management. We further analyzed, in the primary studies, the patterns introduced in Section 4.3. We found that their usage can be twofold.

The first usage helps companies understand, trigger, and support organizational change (e.g., new partnerships or a change in role). We observed that the specialization association between OSS types corresponds to a transition of a certain OSS (e.g., a CoP) into a more specialized OSS (e.g., an NoP). For example, connecting multiple local CoPs on a distributed network corresponds to adding the attribute "geolocalization", and hence specializing CoPs into an NoP. This indicates some kind of transition of an existing OSS into another type, possibly due to evolving organizations, partnerships, business strategies, or customers. We made the same observation for the remaining two patterns: in each pattern the aggregating type transforms into the specializing one.
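Such specializations can be read as a simple lookup from a (type, added attribute) pair to the resulting type. The sketch below is ours, not the authors'; it encodes only the CoP-to-NoP example narrated above, and the specialize helper is hypothetical.

```python
# Illustrative only: specialization transitions between OSS types,
# keyed by the source type and the defining attribute that gets added.
TRANSITIONS = {
    ("CoP", "geolocalization"): "NoP",  # distributing local CoPs
}

def specialize(oss_type: str, added_attribute: str) -> str:
    """Return the specialized type, or the original type if none applies."""
    return TRANSITIONS.get((oss_type, added_attribute), oss_type)

print(specialize("CoP", "geolocalization"))  # -> NoP
```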

The three patterns can be combined together, since they involve a common type, CoPs. The diagram in Figure 5 merges the transitions together (each transition label indicates the number of times it was found in literature). We observed a practical example of this transition system in one of our industrial research partners, the Architecture Group at Logica8, which includes practicing software architects from diverse project teams.

The second usage of the OSS patterns in Figure 3 concerns how, based on empirical evidence, OSS types can be combined together effectively. We found that the aggregation patterns in Figure 3 can hold both within and across organizations. The former is when specific community types (e.g., CoPs) are combined with other types (e.g., NoPs) to increase communication and organizational efficiency. The latter is when organizations combine (pieces of) communities (e.g., indoor software architecture CoPs) to partner on specific projects. Therefore, a company can identify its OSS type (by analyzing its attributes) and find (through the patterns) other OSSs that can be aggregated effectively.

For example, Philips NL9 was a pioneer of collaboration with open-source (social) NoPs, but the process was long and costly [Torkkeli et al. 2007]. After the community was up and running, Philips NL commonly used combined expertise from its internal CoPs and its digital NoP to join collaborations on specific projects. In Philips NL's scenario, three communities are involved: (a) the indoor software development community (CoP); (b) the open-source community sponsored by Philips (NoP); and (c) the project teams involved in cross-organizational collaborative projects (PTs). Philips' scenario matches pattern "(c)". Using details of the key types involved, Philips NL could have planned the "expansion" pattern "(c)" to speed up the process, make it explicit, or support it.

5.1.3. Exploitation in Lifecycle Management. There are many scenarios in which our results can be used during the software lifecycle.

A first possibility is during process planning, where practitioners can compare the development problem with OSS types. For example, one of the goals could be to identify the best-fit OSS.

In addition, while developing software, practitioners can analyze the development OSS. One of the goals could be to understand and possibly correct the OSS status (attribute values).

The way in which OSS types can be defined suggested to us a sequential process that can be used in both scenarios. In step 1, practitioners identify the type which best fits the development problem. In steps 2 and 3, defining attributes and dependency graphs are used to tailor the type to fit the development problem. In step 4, the types metamodel is used to understand the relations for the selected types. In step 5, additional generic attributes are used to enrich the OSS to better support its domain (e.g., using governance to overcome certain domain barriers). Finally, in step 6, the transition system is used to monitor the OSS type, predicting and supporting its state changes. For example, in the first scenario, let us suppose that organization X needs to develop a large and concurrent ballistic-control system (i.e., a critical system). X requires a formal "MembersOfficialStatus", since personnel background should be certified for security reasons. In step 1, the best-fit OSS type for X's efforts is FN, since its "MembersOfficialStatus" is fixed to a formal value. In step 2, the ballistic-control FN is characterized with another attribute, "CreationProcess". The official status of FN members is formally acknowledged, and therefore the creation process should respond to precise and formal semantics. Organization X should support these semantics explicitly (e.g., by specific ad hoc ontologies backed by automated security checks on personal and professional background). In step 3, management of the ballistic-control FN uses the FNs' dependency graph to choose the appropriate "KnowledgeActivity", "DegreeOfFormalization", and "ManagementPractices".

8http://www.logica.com.9http://www.philips.nl.


Fig. 6. OSS classification meter.

As a consequence of "MembersOfficialStatus", some values might not be applicable. For example, the ballistic-control system should be engineered through self-contained and independent teams in the FN. It follows that the "KnowledgeActivity" attribute is set to status-only, hence no cross-team collaboration is possible, and the project must be developed in self-contained work units. In step 4, management of the ballistic-control FN uses the type relations to exclude informal networks from the development loop. Informal networks would constitute a security risk and are organizationally incompatible with formal networks. In step 5, management of the ballistic-control FN further enriches it for its domain by selecting emotional management as an additional governance practice. Step 6 is not applicable, since we have not found any patterns linking FNs to other types. Nevertheless, during the software process, the ballistic-control FN could accidentally move to another type in the transition system (e.g., a CoP, due to changing military policies). The OSS transition system becomes useful to understand and support further type shifts.
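Step 1 of this process amounts to matching the problem's fixed constraints against each type's defining attribute values. Below is a minimal sketch of ours, with hypothetical, partial type profiles standing in for the full tables of Section 4.

```python
# Hypothetical, partial type profiles: each OSS type with the defining
# attribute values it fixes (the real profiles come from the survey).
OSS_PROFILES = {
    "FN": {"MembersOfficialStatus": "formal"},
    "IN": {"MembersOfficialStatus": "informal"},
}

def best_fit(constraints: dict) -> list:
    """Step 1: types whose fixed attribute values meet every constraint."""
    return [oss for oss, fixed in OSS_PROFILES.items()
            if all(fixed.get(attr) == value
                   for attr, value in constraints.items())]

# Organization X's ballistic-control system requires formal status.
print(best_fit({"MembersOfficialStatus": "formal"}))  # -> ['FN']
```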

5.2. Observations

Two key observations were made on our results. The text following the observations explains their implications.

First Observation. Project teams are well-defined OSS types, usually tailored for specific projects, but acting according to clear-cut operational dynamics. According to our results, project teams are merely one of the 13 OSSs observed in literature. It is remarkable how traditional software engineering has considered project teams almost exclusively. The remaining 12 flavors were left relatively unexplored for the purpose of building software. Additional research should be carried out into the usage of the remaining 12 types for software engineering. Indeed, some of them could prove very efficient for specific domains. For example, the dynamics within problem-solving communities and their focus on solving problems for immediate benefit could be efficient for developing safety-critical systems (e.g., health-care software), since these systems should maintain the benefit and well-being of their users. The OSS definitions and the selection mechanisms we have provided could be used as a map to explore this uncharted segment of software engineering.

Second Observation. Project teams and social networks are nongeneralizable supertypes (see Figure 2). These two types sit at the base of organizational practice. Based on how project teams and social networks are defined in literature, we observed that they represent the two extremes of an ideal classification meter for organizational social structures in software engineering. We have produced this ideal meter in the form of a two-axis diagram in Figure 6. The figure shows Project Teams (PTs) at the bottom left, since they typically operate through divide-et-impera approaches, as part of closed-source software development projects. Moreover, the figure shows Social Networks (SNs) at the top right, since they are typically used explicitly in open-source communities to develop software (e.g., projects that are part of the Linux Foundation10); also, they are evolutionary in nature, that is, they follow a more Darwinian, from-fish-to-frog approach, in which the collective autonomously pulls the single engineering tasks and the fittest contribution "survives".

10http://www.linuxfoundation.org/.

Two implications follow from this observation. First, when approaching software engineering, practitioners immediately associate project teams with the process of developing software. To our knowledge, no structured process or software engineering approach has been suggested that explicitly uses social networks as the basic unit of production for software, in spite of their successful use in OSS research and practice. Additional research should be carried out in "social-network-centric software engineering". In this field, domain-specific social networks are used to carry out software engineering, rather than project teams.

Second, our data was insufficient to classify the remaining 11 OSS types through the diagram in Figure 6. Such a classification could uncover the underlying relation between the two dimensions in the diagram, for example, to understand how far these two dimensions can benefit from each other. Also, it would be interesting to investigate (in organizational research) which areas of the diagram have been charted so far and with which benefits, for example, to support their further exploration for software engineering. The diagram in Figure 6, as well as the results presented in Section 4, could be used to bootstrap such research.

5.3. Threats to Validity

Based on the taxonomy in Wohlin et al. [2000], there are four potential validity threat areas, namely: external, construct, internal, and conclusion validity.

External Validity concerns the applicability of the results in a more general context. Since our primary studies are drawn from a wide range of disciplines, our results and observations might be only partially applicable to the software engineering discipline. This may threaten external validity. To strengthen external validity, we organized feedback sessions. We analyzed follow-up discussions and used this qualitative data to fine-tune our research methods and the applicability of our results. In addition, we prepared a bundle of all the raw data, all models drawn, all tables, and everything that we used to compose this article, so as to make it available to all who might want to further their understanding of our data (for links see the Appendix). We hope that this can help in making the results and our observations more explicit and applicable in practice.

Construct Validity and Internal Validity concern the generalizability of the constructs under study, as well as the methods used to study and analyze data (e.g., the types of bias involved). To mitigate these threats, we adopted formal grounded-theory methods; these were conceived to avoid bias by construction [van Niekerk and Roode 2009; Corbin and Strauss 1990; Haig 1995]. To ensure internal and construct validity even further, the initial set of codes for grounded theory was developed by an external researcher and checked by another external reviewer who is not among the authors and does not belong to the software engineering field. In addition, we applied grounded theory in two rounds: (a) first, the primary studies were split across a group of students, who applied grounded theory; (b) in the second round, one of the authors re-executed a blind grounded theory on the full set of primary studies. When both rounds were finished, both grounded theories were analyzed evenly to construct a unique theory. When disagreement between the two samples was found, a session was organized with students, researchers, and supervisors to examine the samples and check them against literature.
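As an illustration of where such reconciliation sessions would start, here is a small sketch of ours (not the authors' tooling; the study names and codes are made up) that flags disagreements between the code sets produced by the two rounds.

```python
# Compare the codes assigned to each primary study by the two
# grounded-theory rounds and surface disagreements for joint review.
round_a = {"study-17": {"community", "informal"},
           "study-23": {"network", "geodispersed"}}
round_b = {"study-17": {"community", "formal"},
           "study-23": {"network", "geodispersed"}}

for study in sorted(set(round_a) | set(round_b)):
    a = round_a.get(study, set())
    b = round_b.get(study, set())
    if a != b:
        # The symmetric difference holds codes only one round produced.
        print(study, "needs a review session:", sorted(a ^ b))
```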

Conclusion Validity concerns the degree to which our conclusions are reasonable based on our data. The logical reasoning behind our conclusions is dictated by sound analysis of the data through grounded theory and other analysis methods which try to construct theory from data rather than confirming initial hypotheses, as explained in Haig [1995] and Schreiber and Carley [2004]. Moreover, all the conclusions in this article were drawn by three researchers and double-checked against data, primary papers, or related studies.

6. CONCLUSIONS

We conducted a systematic literature review based on grounded theory to obtain a definition and comparison of OSSs. With reference to Section 3.1, we set out to answer the following research questions.

(1) What types of OSSs can be distinguished in literature?
(2) What attributes can be identified for each type?

We answered our first research question with 13 OSS types elicited from literature. For each type we provided: (a) a single differentiating attribute; (b) a dependency graph; (c) a set of attributes unique to that type. We also compared each type with the others. Moreover, we identified 15 additional attributes (and their dependencies) commonly/generally applicable to all types. This last result, in addition to the defining attributes of every type, answers research question 2. Finally, we provided an OSS transition system (between key OSS types) made of patterns through which OSS types can be combined together.

Our evidence and discussions support two key conclusions. On one hand, we observed that project teams are not the only OSS that could be used to develop software. We conclude that additional research should be carried out to explore other OSSs for the purpose of building software. Our results could be used as a starting point for such research.

On the other hand, in this article we have described how to decide on the OSS best fitting a certain development problem (e.g., critical systems, as discussed in Section 5) rather than selecting project teams by default. Consequently, this article might serve as the first attempt to consider engineering software with an OSS-aware approach, that is, selecting and supporting the organizational structure which fits exactly with the software development problem. Indeed, the primordial stages of this discipline can be found in the emergence of unconventional OSSs such as those described in Yamauchi et al. [2000], Northrop et al. [2006] or, even more recently, in Kazman and Chen [2010]. In the presence of such emergent OSSs, traditional software engineering is no longer applicable [Northrop et al. 2006]. We argue that the new models presented in response to this shortcoming of traditional software engineering (e.g., the "Metropolis" model in Kazman and Chen [2010]) are hybrids of our 13 OSS flavors (e.g., SNs and NoPs, in the case of "Metropolis"). Therefore we conclude that this article serves as a first rudimentary compass for this unexplored research segment, namely "OSS-aware software engineering".

APPENDIX

ELECTRONIC APPENDIX

The electronic appendix for this article can be accessed in the ACM Digital Library.


ACKNOWLEDGMENTS

The authors would like to thank Mirella Sangiovanni for her care in correcting this document and the anonymous reviewers for their invaluable contributions.

REFERENCES

ALLEN, J., JAMES, A. D., AND GAMLEN, P. 2007. Formal versus informal knowledge networks in R&D: A case study using social network analysis. Res. Des. Manag. 37, 3, 179–196.

ANDRIESSEN, J. H. E. 2005. Archetypes of knowledge communities. In Communities and Technologies. Springer, 191–213.

ARMBRUST, M., FOX, A., GRIFFITH, R., JOSEPH, A. D., KATZ, R., KONWINSKI, A., LEE, G., PATTERSON, D., RABKIN, A., STOICA, I., AND ZAHARIA, M. 2010. A view of cloud computing. Comm. ACM 53, 50–58.

BIRD, C., NAGAPPAN, N., DEVANBU, P., GALL, H., AND MURPHY, B. 2009a. Does distributed development affect software quality? An empirical case study of Windows Vista. In Proceedings of the 31st International Conference on Software Engineering (ICSE'09). 518–528.

BIRD, C., NAGAPPAN, N., GALL, H., MURPHY, B., AND DEVANBU, P. 2009b. Putting it all together: Using socio-technical networks to predict failures. In Proceedings of the 20th International Symposium on Software Reliability Engineering (ISSRE'09). 9–119.

BLANKENSHIP, N. AND RUONA, N. 2009. Exploring knowledge sharing in social structures: Potential contributions to an overall knowledge management strategy. Adv. Devel. Human Res. 11, 3, 290.

BOGENRIEDER, I. AND NOOTEBOOM, B. 2004. Learning groups: What types are there? A theoretical analysis and an empirical study in a consultancy firm. Organization Stud. 25, 2, 287–313.

CATALDO, M., HERBSLEB, J. D., AND CARLEY, K. M. 2008. Socio-technical congruence: A framework for assessing the impact of technical and work dependencies on software development productivity. In Proceedings of the 2nd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM'08). ACM Press, New York, 2–11.

CATALDO, M., MOCKUS, A., ROBERTS, J. A., AND HERBSLEB, J. D. 2009. Software dependencies, work dependencies, and their impact on failures. IEEE Trans. Softw. Engin. 35, 6, 864–878.

CATALDO, M. AND NAMBIAR, S. 2009a. On the relationship between process maturity and geographic distribution: An empirical analysis of their impact on software quality. In Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE'09). ACM Press, New York, 101–110.

CATALDO, M. AND NAMBIAR, S. 2009b. Quality in global software development projects: A closer look at the role of distribution. In Proceedings of the 4th IEEE International Conference on Global Software Engineering (ICGSE'09). 163–172.

CATALDO, M. AND NAMBIAR, S. 2012. The impact of geographic distribution and the nature of technical coupling on the quality of global software development projects. J. Softw. Evolut. Process. 24, 2, 153–168.

CHARD, K., CATON, S., RANA, O., AND BUBENDORFER, K. 2010. Social cloud: Cloud computing in social networks. In Proceedings of the 3rd International Conference on Cloud Computing.

CORBIN, J. AND STRAUSS, A. 1990. Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociol. 13, 1, 3–21.

CUMMINGS, J. N. 2004. Work groups, structural diversity, and knowledge sharing in a global organization. Manag. Sci. 50, 3, 352–.

CUSICK, J. J. AND PRASAD, A. 2006. A practical management and engineering approach to offshore collaboration. IEEE Softw. 23, 5, 20–29.

DATTA, S., SINDHGATTA, R., AND SENGUPTA, B. 2011. Evolution of developer collaboration on the Jazz platform: A study of a large scale agile project. In Proceedings of the 4th India Software Engineering Conference (ISEC'11). ACM Press, New York, 21–30.

DEBRUYNE, C., REUL, Q., AND MEERSMAN, R. 2010. GOSPL: Grounding ontologies with social processes and natural language. In Proceedings of the 7th International Conference on Information Technology: New Generations (ITNG'10). S. Latifi, Ed., 1255–1256.

DICKINSON, A. M. 2002. Knowledge sharing in cyberspace: Virtual knowledge communities. In Proceedings of the 4th International Conference on Practical Aspects of Knowledge Management (PAKM'02). 457–471.

FERSHTMAN, C. AND GANDAL, N. 2008. Microstructure of collaboration: The 'social network' of open source software. CEPR Discussion Papers 6789, C.E.P.R. Discussion Papers.

FOX, M. S. 1988. An organizational view of distributed systems. In Distributed Artificial Intelligence. Morgan Kaufmann Publishers, 140–150.


HAIG, B. 1995. Grounded theory as scientific method. Philosophy of Education (online). http://jan.ucc.nau.edu/~pms/cj355/readings/Haig%20Grounded%20Theory%20as%20Scientific%20Method.pdf.

HANNOUN, M., BOISSIER, O., SICHMAN, J. S., AND SAYETTAT, C. 2000. MOISE: An organizational model for multi-agent systems. In Proceedings of the 7th International Ibero-American Conference and the 15th Brazilian Symposium on Advances in Artificial Intelligence (IBERAMIA-SBIA'00). M. C. Monard and J. S. Sichman, Eds., Lecture Notes in Computer Science, vol. 1952, Springer, 156–165.

HANSEN, M. 1999. The search-transfer problem: The role of weak ties in sharing knowledge across organization subunits. Admin. Sci. Quart. 44, 1, 82–111.

HATALA, J.-P. AND LUTTA, J. G. 2009. Managing information sharing within an organizational setting: A social network perspective. Perform. Improv. Quart. 21, 4, 5–33.

HEPP, M., BACHLECHNER, D., AND SIORPAES, K. 2006. OntoWiki: Community-driven ontology engineering and ontology usage based on wikis. In Proceedings of the International Symposium on Wikis (WikiSym'06). ACM Press, New York, 143–144.

HERBSLEB, J. AND MOCKUS, A. 2003. An empirical study of speed and communication in globally distributed software development. IEEE Trans. Softw. Engin. 29, 6, 481–494.

HORLING, B. AND LESSER, V. 2004. A survey of multi-agent organizational paradigms. Knowl. Engin. Rev. 19, 4, 281–316.

HUSTAD, E. 2007. Managing structural diversity: The case of boundary spanning networks. Electron. J. Knowl. Manag. 5, 4, 399–409.

HUSTAD, E. 2010. Exploring knowledge work practices and evolution in distributed networks of practice. Electron. J. Knowl. Manag. 8, 1, 69–78.

ISERN, D., SÁNCHEZ, D., AND MORENO, A. 2011. Organizational structures supported by agent-oriented methodologies. J. Syst. Softw. 84, 2, 169–184.

JETTER, M., SATZGER, G., AND NEUS, A. 2009. Technological innovation and its impact on business model, organization and corporate culture - IBM's transformation into a globally integrated, service-oriented enterprise. Bus. Inf. Syst. Engin. 1, 1, 37–45.

JOHNSEN, E. C. 1985. Network macrostructure models for the Davis-Leinhardt set of empirical sociomatrices. Social Netw. 7, 3, 203–224.

KAROLAK, D. W. 1999. Global Software Development: Managing Virtual Teams and Environments, 1st ed. IEEE Computer Society Press, Los Alamitos, CA.

KAZMAN, R. AND CHEN, H.-M. 2010. The Metropolis model and its implications for the engineering of software ecosystems. In Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research (FoSER'10), G.-C. Roman and K. J. Sullivan, Eds., ACM Press, New York, 187–190.

KITCHENHAM, B., PEARL BRERETON, O., BUDGEN, D., TURNER, M., BAILEY, J., AND LINKMAN, S. 2008. Systematic literature reviews in software engineering - A systematic literature review. Inf. Softw. Technol. 51, 1, 7–15.

KLEIN, J. H., CONNELL, N. A. D., AND MEYER, E. 2005. Knowledge characteristics of communities of practice. Knowl. Manag. Res. Pract. 3, 2, 106–114.

KSHETRI, N. 2010. Cloud computing in developing economies. IEEE Comput. 43, 10, 47–55.

KWAN, I., SCHROTER, A., AND DAMIAN, D. 2011. Does socio-technical congruence have an effect on software build success? A study of coordination in a software project. IEEE Trans. Softw. Engin. 37, 3, 307–324.

LANGHORNE, R. 2001. The Coming of Globalization: Its Evolution and Contemporary Consequences. Palgrave, London and New York.

LEE, S. H. AND WILLIAMS, C. 2007. Dispersed entrepreneurship within multinational corporations: A community perspective. J. World Bus. 42, 4, 505–519.

LIBRARYA, U. 2010. Bibliometrics - An introduction. Ph.D. thesis, Hinweis.

LIN, H., DAVIS, J., AND ZHOU, Y. 2009. Integration of computational and crowd-sourcing methods for ontology extraction. In Proceedings of the 5th International Conference on Semantics, Knowledge and Grid. 306–309.

LINDKVIST, L. 2005. Knowledge communities and knowledge collectivities: A typology of knowledge work in groups. J. Manag. Stud. 42, 6, 1189–1210.

LYYTINEN, K., MATHIASSEN, L., AND ROPPONEN, J. 1998. Attention shaping and software risk - A categorical analysis of four classical risk management approaches. Inf. Syst. Res. 9, 3, 233–255.

MARTINELLI, A. 2007. Evolution from world system to world society? World Futures 63, 5–6, 425–442.

MENEELY, A. AND WILLIAMS, L. A. 2009. Secure open source collaboration: An empirical study of Linus' law. In Proceedings of the ACM Conference on Computer and Communications Security. E. Al-Shaer, S. Jha, and A. D. Keromytis, Eds., ACM Press, New York, 453–462.


MENTZAS, G., APOSTOLOU, D., KAFENTZIS, K., AND GEORGOLIOS, P. 2006. Inter-organizational networks for knowledge sharing and trading. Inf. Technol. Manag. 7, 4, 259–276.

MONTICOLO, D. AND GOMES, S. 2011. WikiDesign: A semantic wiki to evaluate collaborative knowledge. Int. J. eCollaboration 7, 3, 31–42.

NAGAPPAN, N., MURPHY, B., AND BASILI, V. 2008. The influence of organizational structure on software quality: An empirical case study. In Proceedings of the International Conference on Software Engineering. 521–530.

NORTHROP, L., FEILER, P., GABRIEL, R. P., GOODENOUGH, J., LINGER, R., LONGSTAFF, T., KAZMAN, R., KLEIN, M., SCHMIDT, D., SULLIVAN, K., AND WALLNAU, K. 2006. Ultra-large-scale systems - The software challenge of the future. Tech. rep., Software Engineering Institute, Carnegie Mellon, June. http://www.sei.cmu.edu/library/assets/ULS_Book20062.pdf.

ONIONS, P. E. W. 2006. Grounded theory applications in reviewing knowledge management literature. In Proceedings of the Leeds Metropolitan University Innovation North Research Conference 1962. 1–20.

OREN, E., VÖLKEL, M., BRESLIN, J., AND DECKER, S. 2006. Semantic wikis for personal knowledge management. In Database and Expert Systems Applications, S. Bressan, J. Küng, and R. Wagner, Eds., Lecture Notes in Computer Science, vol. 4080, Springer, 509–518.

PARREIRAS, F. S., DE OLIVEIRA, E., SILVA, A. B., BASTOS, J. S. Y., AND BRANDAO, W. C. 2004. Information and cooperation in the free/open source software development communities: An overview of the Brazilian scenario. In Proceedings of the 5th National Meeting on Education and Research on Information Science (V CINFORM'04).

PINZGER, M., NAGAPPAN, N., AND MURPHY, B. 2008. Can developer-module networks predict failures? In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT'08/FSE). ACM Press, New York, 2–12.

RICHARDSON, I., CASEY, V., BURTON, J., AND MCCAFFERY, F. 2010. Global software engineering: A software process approach. In Collaborative Software Engineering, I. Mistrik, J. Grundy, A. van der Hoek, and J. Whitehead, Eds., Springer, 35–56.

RILEY, C. AND DOLLING, P. 2011. NASA Apollo 11 Manual: An Insight into the Hardware from the First Manned Mission to Land on the Moon, 1st ed. Haynes Online. http://www.haynes.com/products/productID/563.

ROSSO, C. D. 2009. Comprehend and analyze knowledge networks to improve software evolution. J. Softw. Maint. Evolut. Res. Pract. 21, 3, 189–215.

RUIKAR, N., KOSKELA, N., AND SEXTON, N. 2009. Communities of practice in construction case study organisations: Questions and insights. Constr. Innov. Inf. Process Manag. 9, 4, 434–448.

RUUSKA, I. AND VARTIAINEN, M. 2003. Communities and other social structures for knowledge sharing: A case study in an internet consultancy company. In Communities and Technologies. Kluwer, 163–183.

SCHREIBER, C. AND CARLEY, K. M. 2004. Going beyond the data: Empirical validation leading to grounded theory. Comput. Math. Organization Theory 10, 2, 155–164.

SEIDITA, V., COSSENTINO, M., HILAIRE, V., GAUD, N., GALLAND, S., KOUKAM, A., AND GAGLIO, S. 2010. The metamodel: A starting point for design processes construction. Int. J. Softw. Engin. Knowl. Engin. 20, 4, 575–608.

SINTEK, M., VAN ELST, L., SCERRI, S., AND HANDSCHUH, S. 2007. Distributed knowledge representation on the social semantic desktop: Named graphs, views and roles in NRL. In Proceedings of the 4th European Semantic Web Conference. 594–608.

SOEKIJAD, M., INTVELD, M. A. A. H., AND ENSERINK, B. 2004. Learning and knowledge processes in inter-organizational communities of practice. Knowl. Process Manag. 11, 1, 3–12.

TAMBURRI, D. A., DI NITTO, E., LAGO, P., AND VAN VLIET, H. On the nature of the GSE organizational social structure: An empirical study. In Proceedings of the 7th IEEE International Conference on Global Software Engineering.

TOBIES, S. 2001. Complexity results and practical algorithms for logics in knowledge representation. CoRR cs.LO/0106031. http://cds.cern.ch/record/504396?ln=en.

TORKKELI, M., VISKARI, S., AND SALMI, P. 2007. Implementing open innovation in large corporations. http://www.bibsonomy.org/bibtex/244f0ce478ae3df0644368637ba17ebea/luise_k.

TOSUN, A., TURHAN, B., AND BENER, A. 2009. Validation of network measures as indicators of defective modules in software systems. In Proceedings of the 5th International Conference on Predictor Models in Software Engineering (PROMISE'09). ACM Press, New York, 5:1–5:9.

TURHAN, B., MENZIES, T., BENER, A. B., AND DI STEFANO, J. 2009. On the relative value of cross-company and within-company data for defect prediction. Empirical Softw. Engin. 14, 5, 540–578.

UZZI, B. 1997. Social structure and competition in interfirm networks: The paradox of embeddedness. Admin. Sci. Quart. 42, 1, 35–67.


VAN NIEKERK, J. C. AND ROODE, J. D. 2009. Glaserian and Straussian grounded theory: Similar or completely different? In Proceedings of the Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT'09), B. Dwolatzky, J. Cohen, and S. Hazelhurst, Eds., ACM Press, New York, 96–103.

VITHESSONTHI, N. 2010. Knowledge sharing, social networks and organizational transformation. Bus. Rev. Cambridge 15, 2, 99–109.

WENGER, E., MCDERMOTT, R. A., AND SNYDER, W. 2002. Cultivating Communities of Practice: A Guide to Managing Knowledge. Harvard Business School Publishing. http://hbswk.hbs.edu/archive/2855.html.

WENGER, E. C. AND SNYDER, W. M. 2000. Communities of practice: The organizational frontier. Harvard Bus. Rev. 78, 1, 139.

WITTEN, B., LANDWEHR, C., AND CALOYANNIDES, M. 2001. Does open source improve system security? IEEE Softw. 18, 5, 57–61.

WOHLIN, C., RUNESON, P., HOST, M., OHLSSON, M. C., REGNELL, B., AND WESSLÉN, A. 2000. Experimentation in Software Engineering: An Introduction. Kluwer Academic Publishers.

YAMAUCHI, Y., YOKOZAWA, M., SHINOHARA, T., AND ISHIDA, T. 2000. Collaboration with lean media: How open-source software succeeds. In Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW'00). ACM Press, New York, 329–338.

Received March 2012; revised October 2012; accepted October 2012


Model Based Systems Engineering with Department of Defense Architectural Framework

Chris Piaszczyk


Received 15 June 2009; Revised 19 July 2010; Accepted 18 October 2010, after one or more revisions. Published online 16 February 2011 in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/sys.20180

ABSTRACT

This paper presents a methodology for Model Based Systems Engineering (MBSE) utilizing the guidelines provided by the Department of Defense Architectural Framework (DoDAF). MBSE focuses the systems engineering process on system modeling. Thus, requirements derivation, system design, integration, verification, and validation activities all center around the model of the system. System models can be built in many ways. This paper presents an approach using DoDAF. Visual accessibility of the DoDAF views facilitates full participation by all system stakeholders, including the customers, developers, and implementers, and enables the necessary dialogue. The approach promotes communication among the members of the development team and speeds up the iterations of the systems engineering process. The paper illustrates the utility of the DoDAF artifacts by means of a simplified but comprehensive, easy to follow, illustrative example. © 2011 Wiley Periodicals, Inc. Syst Eng 14: 305–326, 2011

Key words: Model Based Systems Engineering; DoDAF; architectures; requirements analysis; system design; integration; verification; validation

1. INTRODUCTION TO MODEL BASED SYSTEMS ENGINEERING

A recent Monterey Naval Postgraduate School thesis [Clark, Howell, and Wilson, 2007, p. 43] states:

Inadequate requirements generation allows ambiguity to affect all follow-on activities. This usually shows up during detail design and requires additional engineering and possibly programmatic effort to unravel the missing or incorrect information. Once the ship construction contract is signed, any change in contract documentation or government/vendor furnished documentation is considered out of scope and usually requires some type of cost adjustment.

Requirements analysis, one of the core systems engineering activities, continues to pose significant challenges. While very difficult, it is also extremely important, because requirements shortcomings cause schedule delays and cost overruns in many projects. Clearly, improvements to the methods and practice of developing an unambiguous, consistent, correct, and complete set of requirements are still needed.

In recent years, various approaches jointly referred to as Model Based Systems Engineering (MBSE) have been proposed to help address this difficult problem. Rather than just a new methodology, MBSE is a change of paradigm away from requirements written using only a word processor. In this new paradigm, requirements development is no longer the task of a group of subject matter experts sitting around the table in a locked conference room or, even worse, of each expert working in his own cubicle in isolation. Although in the old paradigm each of the experts may have had a personal vision of the system model, MBSE brings the system model to the center stage for use by all stakeholders together.

A model is an idealization of the real world emphasizing certain specific characteristics relevant to the problem at hand. Modeling has been the backbone of the natural sciences for ages. The value of modeling in systems engineering can perhaps be best illustrated by means of a well-known allegory. The requirements discovery process is comparable to the proverbial examination of an elephant by a group of blind men. Each one describes his own view of the animal, and, as we all know, these can be drastically different. However, if a model of the elephant is available, the individual views filtered through the perspective of the model have a better chance of being representative of reality. So it is with requirements. Each stakeholder has his or her own view of the system they require, and the systems engineer's job is to assimilate all these into one coherent representation, or model, of reality.

The systems model is a representation of the desired outcome of the systems engineering design process. Like many other aspects of systems engineering, modeling is simply common sense. It is natural to imagine what the system will look like before committing to its construction. These days a suitable high-level system model can be built with a computer tool that allows the collaborating team to "visualize" the entire system and the surrounding environment. This includes the customers (originating stakeholders), who can now more clearly "see" their vision. Thus, modeling pulls the process of system validation towards the beginning of the project. Clearly, we would rather know the answer to the question "Are we building the right system?" as soon as possible. Once we know that we are building the right system, "all" we need to do is apply the verification process to make sure that "we are building the system right."

Although overriding in its importance, the customers' vision expressed through the originating requirements is seldom clear. The exact nature of the system to build is revealed through iterations of requirements analyses and system design refinements that have to continue until all stakeholders' requirements are satisfied. Through such iteration loops, the requirements are being defined at the same time as the system itself. The requirements changes experienced during these iterations can be very dynamic. By the time a set of requirements is ready for final approval, it may be necessary to start over. This is especially true of large, complex projects. Stakeholders are affected by changing politics, diminishing budgets, and evolving technologies. The requirements change as stakeholders become more clearly aware of their needs and because of external influences in our ever faster changing world. To keep up with these dynamics, we need a systems engineering methodology that can be turned around fast enough to fit within the "turning circle" of the bigger changes. Building a model may require some extra time investment up front, but the return on this investment is tangible because the needed changes are easier to manage with the help of a model.

Those of us who work as engineers will stop and say at this point: "Wait a minute! What's new here? We always used models." True, systems engineering models are just another type of model. The "new" part is making the model the focal point of all the systems engineering activities. In contrast, the focal point of the document-centric requirements analysis is the proverbial "Victorian novel" of requirements.

The recent "Systems Engineering Vision 2020" [Crisp, 2007], published by the International Council on Systems Engineering (INCOSE), states that systems engineering is evolving from the document-centric to the model-centric approach, and that the latter "is expected to replace the document-centric approach that has been practiced by systems engineers in years past and to become fully integrated into the systems engineering process" (p. 15). INCOSE also recently sponsored a survey of MBSE methodologies, and the results are described by Estefan [2008]. This survey report defines methodology as "…a collection of related processes, methods, and tools. A methodology is essentially a 'recipe' and can be thought of as the application of related processes, methods, and tools to a class of problems that all have something in common" (p. 1). The survey reviews, at a summary level, several MBSE methodologies including the IBM/Telelogic/i-Logix Harmony SE, the INCOSE Object Oriented Systems Engineering Method (OOSEM), the IBM Rational Unified Process for Systems Engineering (RUP SE), the Vitech Model-Based System Engineering Methodology, and JPL State Analysis (SA). Each of these methodologies provides a model-based approach for performing the three major steps of the systems engineering process: requirements analysis, system functional analysis, and architectural design.

One objective of this paper is to present a novel MBSE methodology utilizing the guidance offered by the Department of Defense Architectural Framework (DoDAF) views. The proposed approach begins with DoDAF's Operational View (OV). The OV consists of mission-level/"enterprise business process" models that extend the concept of UML use cases for modeling interactions between operators and the system. Extremely helpful in understanding human-system integration and the resulting requirements, the OV artifacts are a "natural" part of the DoDAF framework. Proper integration of humans and human organizations with and within the complex systems we are designing today is a very important factor in a system's acceptance and a major contributor to the system's overall success. Mission-level "enterprise business process" OV use case analyses treat the system as a "black box." With the understanding derived from the OV analyses, DoDAF's System View (SV) artifacts are then used for "white box" analyses of the inner workings of the system itself with increasing levels of depth. Depending on the way the system is defined, the operators (actors) can be inside or outside the system boundary. Both situations can be addressed with OV and SV products. The methodology presented in this paper produces a system model consisting of the OV and SV products.

Despite the inroads made by MBSE, requirements documents are probably not going away any time soon. They are embedded deep in the current systems engineering practice of many companies. They are used for developing integration plans, test plans, etc. However, MBSE will make a major contribution to streamlining the process of requirements generation. This brings us to the other major objective of this paper, which is to show how to derive requirements from the architectural models. Even if systems engineering uses models exclusively in the future, relating requirements to models is important for now. Here, we show how to use the OV products to formulate an associated set of derived operational requirements. The SV products are used first to build a functional model and to derive functional requirements. Subsequently, we build a system model and derive system requirements. The system model is the last level of abstraction before the actual physical implementation of the system. Finally, we build the representation of the physical model and the physical requirements. Thus, we end up with four perspective levels, or architectural viewpoints. Each perspective is associated with its own corresponding set of requirements that can be used for integration and test planning.
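The layering just described can be summarized as a simple mapping from viewpoint to the requirement set derived at that level. The sketch below is ours; the labels paraphrase the text rather than DoDAF terminology.

```python
# The four perspective levels and the requirement sets derived from them.
VIEWPOINT_REQUIREMENTS = {
    "operational": "derived operational requirements (from OV products)",
    "functional":  "functional requirements (from the SV functional model)",
    "system":      "system requirements (last abstraction before build)",
    "physical":    "physical requirements (implementation level)",
}

for level, reqs in VIEWPOINT_REQUIREMENTS.items():
    print(f"{level:>12}: {reqs}")
```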

The methodology presented in this paper is intended to help realize the vision defined in Crisp [2007] by tying requirements generation to the system model developed with DoDAF. The hierarchical nature of the DoDAF views allows for natural scalability of this methodology. We believe the usefulness of the methodology presented here extends to nonmilitary applications, despite the defense pedigree of DoDAF, because all systems, military and commercial alike, consist of people, hardware, and software. This paper does not intend to teach the application of DoDAF but only to use it as a guide for the process of constructing a system model. The methodology can be modified to work with any architectural framework, because a system model must be presentable with a set of architectural artifacts, no matter which architectural framework it is derived with.

Harmony SE, OOSEM, and RUP SE, as well as several other MBSE methodologies, use the Object Management Group's (OMG) Systems Modeling Language (SysML) [OMG, 2006]. SysML has recently evolved out of the Unified Modeling Language (UML) [OMG, 2007]. UML has grown around the ideas of Model Driven Architecture (MDA), which actively tries to address interoperability issues in the automation of enterprise information systems with a platform-independent language such as UML. Use of UML allows defining a Platform Independent Model (PIM) that can be translated into a Platform Specific Model (PSM) for any desired computing platform. Executable UML automates this translation. The success of this philosophy of "layered abstractions" has been recognized by the wider systems engineering community.

MBSE can also be implemented in the graphical language called Integration DEfinition for Function modeling (IDEF) [NIST, 1993]. IDEF is a language originated by Douglas T. Ross and SofTech, Inc., based on the Structured Analysis and Design Technique (SADT). For more information on SADT, the reader is referred to Yourdon [2010], DeMarco [1979], and Yourdon and Constantine [1975]. The choice of SysML vs. IDEF is left entirely to the reader's preferences. Neither we nor DoDAF recommend one over the other.

2. ENTERPRISE ARCHITECTURES AND ARCHITECTURAL FRAMEWORKS

The word "architecture" typically brings to mind order and symmetry. We think of Palladio's opulent Venetian renaissance villas or Frank Lloyd Wright's austere modern "organic" elegance. Order and symmetry were certainly at the foundation of the ideas underlying the concept of architectural frameworks developed by John Zachman for designing business information technology applications in Zachman [1987]. Since then, the word "business" has been replaced with the word "enterprise," but the notion of architecture in reference to the structure of the business organization and the underlying processes took deep roots and endured.

As defined by Zachman [1987], an architectural framework is a set of rules defining the artifacts for describing enterprise system architectures. The idea is to deal with the complexity of enterprise systems by viewing them from a variety of perspectives. Zachman's framework organizes these artifacts into six perspective levels, from the most general to the most detailed: Scope, Business Model, System Model, Technology Model, Detailed Presentations, and Functioning Enterprise. Within each perspective, six artifacts address six questions: what, how, where, who, when, and why. Thus, in Zachman's framework, a total of 36 artifacts are available to define a system architecture. Rarely, if ever, are all 36 actually used to present a particular architecture.
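Since the perspectives and questions are both enumerated in the text, the 36 artifact slots can be generated mechanically; the following is a small sketch of ours.

```python
# Zachman's grid: 6 perspectives x 6 questions = 36 artifact slots.
PERSPECTIVES = ["Scope", "Business Model", "System Model",
                "Technology Model", "Detailed Presentations",
                "Functioning Enterprise"]
QUESTIONS = ["what", "how", "where", "who", "when", "why"]

slots = [(p, q) for p in PERSPECTIVES for q in QUESTIONS]
print(len(slots))  # 36; rarely are all of them used for one architecture
```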

An architectural framework is a meta-model for describing complex systems. A meta-model is an example of "ontology," a term that originally meant "the branch of philosophy concerned with the nature of existence and the relationships between things" and is used in computer science today to define the set of entity types, their properties, and the relationship types within a particular domain.

Zachman's [1987] systematic presentation of the architectural artifacts in the form of a tabular framework was a giant step forward. Suddenly, it all became clear. Order replaced chaos. Over time, this beautiful new paradigm found its way into many different walks of life, including the nation's defense establishment. As defense systems became more and more complex, with information technology penetrating their every aspect, architectural frameworks were adopted as a means of organizing the many system modeling artifacts produced by system developers for easier comparison and classification. Today, DoD maintains a centralized repository for architectural artifacts that have to be supplied by every major program, as dictated by the Joint Capabilities Integration and Development System (JCIDS) acquisition process [JCIDS, 2009].

What does the discussion about architectural frameworks have to do with the original question about requirements? Well, systems being developed today are information technology intensive and combine software, hardware, and people. Because of the associated complexity, system models have to be looked at through the prism of architectural frameworks. There is a natural potential here for new progress arising from combining several disciplines that each focuses only on part of the overall problem.

3. DoDAF VIEWS BASICS

The area of defense activity that adopted the architectural framework paradigm the fastest was Command, Control, Communications and Computers Intelligence Surveillance and Reconnaissance (C4ISR), the most highly penetrated by information technology. The first version of the architectural framework standard issued by DoD in 1998 was therefore the C4ISR AF [C4ISR WG, 1997]. In 2003, it was renamed DoDAF 1.0 to indicate the wider scope of the new standard [DoDAF WG, 2003]. DoDAF 1.0 was quickly followed by DoDAF 1.5 [DoDAF WG, 2007] in 2007 and DoDAF 2.0 in 2009 [DoDAF WG, 2009]. There is no end in sight. Other defense organizations around the world are following closely. The UK Ministry of Defence published its MoDAF standards [MoD, 2010], and NATO has NAF [NATO, 2010]. The list also includes AGATE (France), DNDAF (Canada), MDAF (Italy), and ADOAF (Australia).

All architectural frameworks differ from each other. For example, they differ in the way their architectural artifacts are named and organized. The C4ISR AF and DoDAF were not derived from Zachman's Architectural Framework (ZAF) and do not quite duplicate it. While ZAF uses words like "perspectives" and "types of descriptions," DoDAF 1.0 and 1.5 use the word "view" or "views" for collections of artifacts within the same perspective level. The term "products" is used to denote the individual artifacts. DoDAF 2.0 uses the words "viewpoints" and "models"; in DoDAF 2.0 the models become "views" when populated with data. MoDAF and NAF introduced their own innovations. In addition, civilian organizations have been publishing their own frameworks, such as TOGAF [TOGAF, 2010]. Initiatives are underway to define a unified set called the Unified Profile for DoDAF and MoDAF (UPDM) [UPDM WG, 2010]. Initial mappings between DoDAF, SysML, and the AP233 draft ISO standard for exchanging systems engineering data have also been discussed [Bailey et al., 2005].

While DoDAF 1.5 was an incremental extension of DoDAF 1.0, DoDAF 2.0 is significantly different, with new viewpoints such as Capability, Data and Information, Project, and Services, almost doubling the total number of artifacts and levels of perspective. Many of the underlying definitions are being renamed and/or changed, including the Data Metamodel (DM2), which was known as the Conceptual Data Model (CDM) in DoDAF 1.0 and 1.5. Terminology is being aligned with ISO standards, MoDAF, NAF, and ZAF. Semantically related concepts are organized into groups that are the data types of the DM2. DM2 is derived from the ontology developed by the International Defence Enterprise Architecture Specification (IDEAS) Group. The IDEAS model, in turn, is based on "four dimensionalism," which simply means that it can be used to organize "things" with spatial and temporal extent. Operational and system nodes have been eliminated. Nodes in DoDAF 1.0 and 1.5 were abstract logical concepts that were causing problems with inconsistencies in their use. Instead of nodes, DoDAF 2.0 architects have to use concrete concepts such as activities, systems, organizations, materiel, or their combinations. "Organizations," "personnel," and "mechanization" have been generalized to "performers." "Information exchanges" and "data exchanges" have been generalized to "resource flows" that can now represent flows of materiel, information, people, and presumably other items.

The original intent of DoDAF was to provide a consistent format for comparisons between architectures considered by the Department of Defense acquisition community. With its newest version, DoDAF is evolving to support the recent major changes within the Department, including the JCIDS, the Defense Acquisition System (DAS), Systems Engineering (SE), the Planning, Programming, Budgeting, and Execution (PPBE) process, and Portfolio Management (PfM). To provide more flexibility, DoDAF 2.0 includes additional new viewpoints (types of models/views) but is moving its emphasis away from the format towards the data. DoDAF 2.0 encourages development of "Fit-for-Purpose Views." These are user-defined views customized for the specific need at hand. In this paper, we actually take advantage of the expanded flexibility offered by this new version of DoDAF and describe several novel views that we believe are needed to more fully describe the process we discuss.

Although based on DoDAF 1.0, the methodology presented in this paper anticipated the flexibility offered by DoDAF 2.0; the full implications of all the changes introduced in the new version are still being investigated and perhaps will be published in a follow-up article. In the meantime, the DoDAF 2.0 documentation published so far states that "products developed under previous versions of DoDAF, utilized as views, can continue to be used and continue to be supported" [DoDAF WG, 2009, Vol. 1, p. 18]. All that is really required for the following discussion is knowledge of the DoDAF 1.0 Operational View (OV) and System View (SV). We refer the reader interested in all the DoDAF details to the original DoDAF documentation. For readability of presentation, we will now introduce just the necessary bare-bones basics.

DoDAF All Views (AVs) are summaries of the entire architecture. There are only two All Views: AV-1 is the Overview and Summary Information, a textual top-level description of the architecture; AV-2 is the Integrated Dictionary, the running list of terms, names, and definitions used in the architectural products throughout. They are dictated by common sense. Every project involving large teams working over long periods of time can truly benefit from these two documents. Anyone who has worked on a large-scale project will appreciate this. Without a common statement of objectives and a common vocabulary, such projects can quickly deteriorate into a chaotic state resembling the biblical Tower of Babel. To avoid this scenario, AV-1 is used to describe the (hopefully) common project objectives, and AV-2 defines the project's common language. The AV-2 should not be limited to just a list of acronyms but should also include the definitions of all the major terms in use by the program. Needless to say, both documents evolve with the project and require active maintenance.
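
Because the AV-2 is essentially a controlled vocabulary, even a lightweight script can help keep it honest. The following sketch is our own illustration (the entries are hypothetical): it stores terms with their definitions and flags any term used in another product but missing from the dictionary.

```python
# Minimal AV-2 sketch: a running integrated dictionary of terms.
av2 = {
    "OIC": "Officer-In-Charge; leads the operating crew",
    "needline": "Conduit for information exchanges between operational nodes",
}

def undefined_terms(terms_used):
    """Return terms used in an architectural product but missing from the AV-2."""
    return [term for term in terms_used if term not in av2]

print(undefined_terms(["OIC", "needline", "SO"]))  # -> ['SO']: add a definition
```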

The DoDAF Operational View (OV) focuses on the user/operator: "What are the required activities and who will perform them?" User activities and the users performing them are organized into (located in) operational nodes. Information exchanges between activities take place across needlines connecting the nodes. The hierarchy of operational nodes and activities is the operational architecture. From the OV point of view, the system is still a black box. In DoDAF 2.0 [DoDAF WG, 2009], the nodes are gone; the needlines connect locations or organizations.

The DoDAF System View (SV) presents the internal structure of the system, answering the questions: "What is the system supposed to do to support user activities defined by the Operational Views? What are its internal components? How does it work?" User activities are supported by system functions. In other words, users perform their activities utilizing specific system functions. System functions are activities performed by system entities. System functions interact via system data exchanges across system interfaces between system entities. The hierarchy of system functions is the functional architecture. The hierarchy of system entities is the system architecture. The hierarchy of physical components implementing the system functions is the physical architecture.

The DoDAF Standards View (StdV) lists the applicable design standards the system implementation has to comply with. This is analogous to the list of city and state codes used in building construction. The contractor has to comply with them and is expected to be familiar with those applicable in the area where she or he does business. For example, an architectural vision of a house does not need to define the electrical wire gauges or the requirement for junction boxes; the city and state building codes already specify such details. DoDAF 2.0 [DoDAF WG, 2009] renamed this viewpoint simply "Standards View" from the "Technical Standards View (TV)" used in DoDAF 1.0 and 1.5 [DoDAF WG, 2003, 2007] to emphasize the wider scope of applicability intended for DoDAF use.

There isn't much written yet about DoDAF 2.0, but several methodologies for using DoDAF 1.0 to build system architectures have been presented in the literature. The interested reader should consult Levis and Wagenhalls [2000], Wagenhalls et al. [2000], Bienvenue, Shin, and Levis [2000], and Ring et al. [2004]. These publications, however, are limited only to discussions of the process for development of architectural artifacts, without showing how to use them for the derivation of requirements. An interesting extension of the purely architectural development is discussed by Bienvenue and Goodwin [2004]. Their objective was to establish traceability between requirements and architecture. The key thesis of their work was that no elements of the architecture should exist without requirements and no requirements should exist without architectural elements. Requirements derivation was not discussed, assuming that a set of requirements is already available from elsewhere. Model Based Systems Engineering, however, postulates coupling the development of requirements and models together in one complete iterative process. This theme has been actively explored by James Long [Vitech, 2010], a very important pioneer in this field. An excellent discussion of MBSE using SysML is also presented by Hoffman [2006]. Use of system modeling in a "requirements analysis and design loop," where requirements and the system model are iterated together, automatically resulting in consistency between the architectures and requirements, is the main focus of our paper, in which we present an approach guided by DoDAF.

DoDAF documentation does not offer a specific methodology for its application, neither in version 1.0 nor in version 2.0. Similarly, neither Structured Analysis and Design Technique (SADT) nor Object Oriented Analysis and Design (OOAD) derived languages are mandated or recommended. Consequently, many methodologies for developing DoDAF architectures are possible and equally valid. DoDAF 2.0 continues to be "toolset agnostic, allowing architects, and Architectural Description development teams to utilize any toolset they desire" [DoDAF WG, 2009, Vol. 1, p. 28] as long as it is consistent with the DoDAF Meta-model (DM2) Physical Exchange Specification (PES). Likewise, there are no mandated views. Selection of the DoDAF products that should be developed and the order of their development truly depend on the problem at hand. For illustrating the methods described in this paper, the architectural artifacts and the order of development as presented were deemed necessary and sufficient by this writer. Development of a completely new system may require additional views that are not discussed in this example. In practice, developing all available DoDAF views is never needed and probably would be redundant. A reverse engineering problem would require developing the architectural artifacts essentially in reverse order. In the general case, the developer can expect that multiple iterations back and forth between all developed DoDAF products will be necessary, as dictated by the systems engineering process. Again, the original DoDAF documentation should be consulted as needed to learn about the products not discussed in this paper.

The architectural artifacts and the system model in the illustration presented in this paper employ a nonstandard notation that is closer to the graphical IDEF language derived from the Structured Analysis and Design Technique than to the SysML language derived from the Object Oriented Method. However, there is no reason why the same ideas could not be implemented by means of SysML. The emphasis of this paper is not on language rigor, because its principal objectives are to discuss the use of DoDAF to build the system model and the use of the system model for deriving requirements, and a very formal language could potentially obscure the presentation. Any computer implementation would, of course, require a more formal use of language.

4. REQUIREMENTS

A requirement is simply an agreement between two parties: a requiree agrees to fulfill a desire expressed by a requirer. As the practice has evolved, a requirement in systems engineering is a statement containing the word "shall." To write one of these statements, a requirements analyst begins with "The system shall" and follows with the desired system characteristic. Requirements writing is an art. Requirements must be "unambiguous," which is probably the most difficult part. The best practice references repeat that every requirement has to be "verifiable," so that it can actually be determined whether the requirement is satisfied or not. Best practice also calls for requirements that are "atomic," so that each can be verified independently of the others. Many pages have been written about requirements writing; Hooks [1990] provides an excellent discussion going well beyond simple requirements-writing tips.
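
These best practices lend themselves to simple mechanical checks. The sketch below is our own illustration, not taken from Hooks [1990]; its heuristics (the presence of "shall," the word "and" as a hint of a non-atomic statement, a small blacklist of ambiguous words) are deliberately crude.

```python
from dataclasses import dataclass

# A tiny, hypothetical blacklist of vague wording.
AMBIGUOUS = {"user-friendly", "fast", "adequate", "as appropriate"}

@dataclass
class Requirement:
    rid: str
    text: str

def lint(req: Requirement) -> list:
    """Crude, illustrative checks: 'shall' present, atomic, no weasel words."""
    issues = []
    if " shall " not in f" {req.text} ":
        issues.append("missing 'shall'")
    if " and " in req.text.lower():
        issues.append("possibly not atomic (contains 'and')")
    if any(word in req.text.lower() for word in AMBIGUOUS):
        issues.append("ambiguous wording")
    return issues

print(lint(Requirement("R1", "The system shall be fast and user-friendly")))
```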

Requirements are organized in a natural hierarchy. At the top are the originating stakeholders' requirements, which represent the desired system from the originating stakeholders' point of view. Originating requirements are a model of the system. In most cases, these requirements are in the form of a textual description of the stakeholders' vision. Originating stakeholders' requirements usually consist of a very high level definition of the desired functionality and the associated performance. Sometimes, however, these stakeholders may choose to specifically define a particular detail of the design that represents a constraint on the system. Constraints may also be systemwide. For example, a desired technology compatibility with other systems already in the stakeholder's possession can dictate specific technology choices. A typical difficulty with originating requirements is often their imprecise formulation. With multiple stakeholders, some requirements may even be in conflict with each other. In fact, sometimes an entire requirements elicitation effort is required to clarify them. In this very first step, it is necessary to translate the stakeholders' vision into a set of precise, logically organized, verifiable, and unambiguous "shall" statements.

It would be too easy if these top-level system requirements could be written once and for all at the very beginning. The best we can do, in real life, is to use the "onion" approach [Childers and Long, 1994] with iterations. According to the "onion" approach, this top-level requirement set is written as well as humanly possible and then iterated against the first-order model of the system. The feedback from the model is used to revise the top-level requirement set, and the process is continued to the next layer of the systems engineering "onion." It is very desirable to complete each layer as far as possible, but the nature of the problem is such that it may be necessary to iterate across several layers multiple times. This aspect of the requirements analysis process presents management issues, as proven by many failed projects. Gradually, however, a satisfactory set of requirements and the corresponding system model emerge together. Once agreed upon by all active stakeholders or, in other words, "baselined," the top-level requirements constitute the basis of a "contract" between the originating stakeholders and system developers. This set of top-level requirements is organized into a System Requirements Document (SRD). Usually, a System Requirements Review (SRR) is held to finalize this "baselining" process.
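
In schematic form, the "onion" iteration can be summarized as a nested loop in which the requirement set and the model of the current layer are revised until they agree. The sketch below is ours; the stubbed feedback function stands in for an actual model review.

```python
def feedback(requirements, model):
    """Stub: a real model review would return newly discovered requirements."""
    return []

def onion(requirements, layers, max_iterations=5):
    """Schematic "onion" loop: the requirement set and the model of each
    layer are iterated together until no further revisions are needed."""
    for layer in layers:
        for _ in range(max_iterations):
            model = {"layer": layer, "covers": list(requirements)}  # stub model
            gaps = feedback(requirements, model)
            if not gaps:
                break                           # this layer has converged
            requirements = requirements + gaps  # revise the requirement set
        # baseline the layer here (SRR for the top layer, then SFR, PDR, CDR)
    return requirements

print(onion(["The system shall detect targets"], ["operators", "system"]))
```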

A typical SRD outline includes required top-level system capabilities (required system functions and their associated performance) as well as external and internal interface requirements. The SRD also includes nonfunctional systemwide requirements that impose constraints on technology selection, limits on major characteristics such as weight and cost, and the "-ilities," i.e., the desired reliability, availability, maintainability, etc. The top-level functional requirements are decomposed and organized into a model known as the functional architecture, which is basically a hierarchical structure of system functions and their interactions. Once this structure in turn is baselined, it becomes known as the Functional Baseline, and a System Functional Review (SFR) is held to seal the agreement between all stakeholders as another major milestone of the SE Requirements Analysis and System Design process. The Functional Configuration Audit (FCA) is the corresponding dual event in the Verification process, in which the system's ability to actually perform the required functions is verified.

The next level of the requirements hierarchy is called the system or technical requirements set. It is also known as the derived requirements set because these requirements are derived from and traced to the originating requirements. The system requirements hierarchy is organized in many different levels, from the top-level view of the system as a whole to progressively increasing details of the system elements: segments, subsystems, assemblies, and components. System requirements are a description of the logical model of the system. As such, they are determined by the logical structure. Once baselined, this model and the corresponding requirements are formally reviewed at the Preliminary Design Review (PDR). The approved model is known as the Allocated Baseline because the requirements are now allocated to the elements of the system.

The final level is a set of physical requirements that define the actual solution system satisfying all stakeholder requirements and all derived technical requirements. These physical requirements document the choices made by the system designers and list the specific characteristics of the selected system elements, e.g., the processor model number, throughput, and memory capacity. Note that the physical requirements documentation is what is commonly referred to as the "system specs," although the term is also confusingly applied to the stakeholder and technical requirements documents. The corresponding system model is now the Product Baseline, and the review is the Critical Design Review (CDR). During verification, the process of checking the actual system against the Product Baseline is called the Physical Configuration Audit (PCA).

The hierarchical structure of requirements is enforced by requirements traceability. Requirements traceability is implemented as a chain of requirements links directed from the lower level to the upper level. Each lower-level requirement derived from a higher-level requirement is linked to its parent. Thus, physical requirements are linked to the system requirements, which in turn are linked to the functional and to the originating stakeholder requirements. This traceability chain allows one to determine with exacting precision the effects of any single change of a stakeholder requirement on all the lower-level requirements that have been derived from it. This information allows systems engineers to assess the impact of the proposed change on the system performance, cost, and schedule. Specialized tools exist for management of requirements and their traceability. These tools, usually storing the requirements text and link information in a database, are used to provide change impact reports. In addition, traceability is used to identify the stakeholder requirements that can be impacted by a verification failure for a technical requirement in a test.
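
Traceability of this kind is naturally represented as a directed graph of child-to-parent links. The sketch below is our own illustration of the idea, not a specific tool; it stores the links and answers the change-impact question by walking the chain downward. The requirement identifiers are hypothetical.

```python
# Child -> parent links, directed from lower-level to upper-level requirements.
trace = {
    "PHY-1": "SYS-3",   # physical requirement derived from a system requirement
    "SYS-3": "FUN-2",   # system requirement derived from a functional one
    "FUN-2": "STK-1",   # functional requirement traced to a stakeholder need
}

def impact_of(changed, links):
    """All lower-level requirements affected by changing a given requirement."""
    affected = set()
    frontier = {changed}
    while frontier:
        frontier = {child for child, parent in links.items() if parent in frontier}
        affected |= frontier
    return affected

print(impact_of("STK-1", trace))  # -> {'FUN-2', 'SYS-3', 'PHY-1'}
```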

While in the fully developed model-based systems engineering paradigm this will be accomplished with the model alone, in the meantime the requirements and specifications are used to document the consensus reached by the acquirers, users, and developers on the characteristics of the complex system that is to be developed. For each layer of the systems engineering "onion," this requires documenting a set of requirements and the corresponding solutions. Thus, alternating layers of requirements and design documents are needed for every level. Rather than reinventing the wheel, the systems engineer can use DoD Data Item Description (DID) templates for neatly organizing the requirements and design documents according to a set of standard outlines. They are available, for instance, at http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498. These DID templates, which were originally part of MIL-STD-498, are commonly used in the defense and aerospace industry, even though this standard was officially cancelled. MIL-STD-498 was written to assist in the acquisition of software systems and gained wide acceptance. This standard or parts of it were merged with MIL-STD-499 into IEEE/EIA 12207 and eventually into ISO/IEC 15288. The DIDs include DI-IPSC-81431, System/Subsystem Specification (SSS) [SSS DID, 2010], which is a template for the generic/hardware requirements document. DI-IPSC-81433, Software Requirements Specification (SRS) [SRS DID, 2010], is a template for the analogous document for a piece of software. DI-IPSC-81434, Interface Requirements Specification (IRS) [IRS DID, 2010], is a template for system interface requirements. DI-IPSC-81432, System/Subsystem Design Description (SSDD) [SSDD DID, 2010], is a template for the set of physical requirements that define the solution. DI-IPSC-81435, Software Design Description (SDD) [SDD DID, 2010], is a template for software specification. DI-IPSC-81436, Interface Design Description (IDD) [IDD DID, 2010], is a template for interface specification. All DID templates are intended to be tailored to the project needs.

5. REQUIREMENTS DERIVATION FROM THE MODEL

The traditional systems engineering process is widely known, and there is no reason to discuss it further here. A number of very good sources, such as the INCOSE SE Handbook [INCOSE, 2010], are available to the reader. Basically, it starts with the system's requirements analysis and design and ends with system integration, verification, and validation (V&V). It is the first part of this process that we are concerned with in this paper, although we also touch upon the use of Model Based Systems Engineering artifacts in the V&V part.

Dennis Buede showed [Buede, 1995] how derived requirements can be generated directly from IDEF models. The following discussion combines his ideas with DoDAF views to guide the requirements analysis and system design.

For readability and clarity of the presentation, we used a rather simplified example that, we hope, makes the digestion of this material a little easier. Naturally, it is to be expected that in any typical project the complexity of the analysis would exceed that of the example presented in this paper by orders of magnitude.

The discussion starts within the model-centric paradigm by first building a model of operator activities with DoDAF operational views. These operational views are then used to derive the set of operational requirements. Next, the three layers of the system models (functional, system, and physical) are built. For each layer of the system model, a set of requirements is derived. The relationships between the model artifacts and the requirements lists created in this process bridge the gap between the model-centric and the document-centric worlds.

A number of tools are available on the market that can be used to perform the analyses described herein, although the author is not familiar with a single tool that does it all. The intent of this presentation is to explain a general approach rather than discuss the capabilities of any specific tool. Consequently, discussion of the commercially available tools is carefully avoided.

6. BUILDING THE MODEL OF OPERATOR INTERACTIONS WITH THE SYSTEM

As a first step of the systems engineering process, using stakeholders' originating requirements and assistance from the subject-matter experts, systems engineers develop operational scenarios that define how the users expect to use the system. The objective is to define the desired behavior of a system. Both "sunny day" and "rainy day" scenarios are developed to make sure that system behavior is completely understood under all circumstances. Scenarios are then analyzed into "use cases," or elementary interactions between the users and the system. Each use case adds to the set of user requirements. Multiple use cases define various aspects of system usage. We will show how these use cases can be defined with DoDAF's Operational View products.

An interesting and useful aspect of DoDAF Operational Views is the emphasis on the user/operator, i.e., the actor who directly interacts with the system. Exploration of the operational views can provide significant insight into the interactions between the operators themselves and between the operators and the system. Clearly, the operational views represent the top layer of the systems engineering "onion," the operator layer. Once this first layer is "peeled off," the next layer is the engineered system itself, consisting of the equipment the operator interacts with. This layer is described with System Views. Subsequent layers consist of system segments, subsystems, and configuration items. Individual System Views are created for each of these layers. Each layer is "peeled off" in sequence, and the design of the next layer is started when the one above is reasonably complete. Iterations across the layers refine the design as needed.

We will now step through the complete process, illustrating our methodology with an example of a system intended to search or monitor a predefined area for objects classified as threats, detect them, track them, engage them, and assess the effectiveness of the engagement. These are the desired system capabilities. It could be a missile defense system for an entire planet or for a single defense site. It could also be a sales business enterprise that searches the market for potential customers, tracks them, and attempts to sell them its product. Granted, most entrepreneurs don't think of their marketing and sales force as sensor, tracker, and weapon operators, but, in terms of systems modeling abstraction, the analogy is not that farfetched. Of course, this example is not a real missile defense system either, but just a device we are using to present our methodology.

As an introductory view, the DoDAF OV-1 is intended to render the first impression of the system concept and the associated concept of operations. Sometimes, an OV-1 is actually incorrectly referred to as the Concept of Operations (CONOPS). It really only augments the "document-centric" textual CONOPS, which typically contains descriptions of system usage in the form of scenarios that can be used to construct multiple OV-1s, one for each use case.

Figure 1 presents the OV-1 for this system's principal "Search and Destroy" use case. Our conceptual design (first-order system model) consists of sensor, tracker, weapon, and mission planner and manager subsystems. The operators include an Officer-In-Charge (OIC), a Sensor Operator (SO), a Tracker Operator (TO), and a Weapon Operator (WO). The operators interact with the system, controlling its behavior. The system also interacts with the targets in its mission area. The OIC reports to the Commanding Officer (CO). Clearly, the CO represents a higher-level operator than the remaining crew. For the CO, the only interface with the system is the OIC, and the system she (or he) is looking at is different from the system the OIC is interacting with. This type of hierarchical structure of the operational layers of the system is typical for many enterprise-type systems, and DoDAF OVs are a perfect tool for unraveling all their corresponding details. Of course, in real life, in addition to formal structures, humans create many informal networks with interfaces that may be difficult to discover, identify, and characterize.

The OV-2, Operational Node Connectivity, renamed Operational Resource Flow Description in DoDAF 2.0, is shown in Figure 2. During the mission execution, the operators perform various activities and exchange information. In the scenario being envisioned here, it is assumed that in performing their activities and exchanging information the operators utilize the system, as noted in the lists of activities. We have chosen not to show the system in this diagram because at this stage it is only a black box and every one of the operators is interacting with it. Adding it to the picture would only obscure the information we are trying to focus on. At this stage, let us just imagine a box representing the system underlying the entire picture. Today, the operators typically interact with the system by means of computer consoles with displays, keyboards, and joysticks or mice. It is easy to see that the keyboards and mice are being gradually replaced with touch-sensitive screens. Future human-machine interfaces may become more direct, with sensitive electrodes picking up brain waves. All of this is not relevant at all to a systems engineer at this early stage of system conceptualization. In an OV-2, the operators reside in operational nodes connected with pipes called needlines that provide the conduits for the information exchanges. Essentially, the OV-2 refines the use case first introduced with the OV-1 by adding just enough detail to understand the "business process."

Our OV-2 defines the activities for all operators (e.g., "CO Issues Mission Orders" and "OIC Receives Mission Orders from the CO") and the information exchanges (messages) that are passed between them (e.g., "Mission Orders"). Strictly speaking, the DoDAF OV-2 format does not show the contents of the information exchanges flowing through the needlines. In our proposed DoDAF extension, taking advantage of the flexibilities allowed in DoDAF 2.0, we list them in the diagram. This will help us in the requirements generation process that is discussed later. The "operators" in the OV-2 could represent organizations rather than individuals. Generally, the operational nodes contain multiple roles, each performing its share of activities. Operational nodes are typically used to represent geographical (spatial) separation. DoDAF 2.0 did away with operational nodes; instead, the documentation calls for "concrete" definitions. In our example, these concrete entities are simply the Operators.
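
To make this proposed extension concrete, such an OV-2 can be captured as plain data. The encoding below is our own illustration, with names taken from the example; it records the operators, their activities, and the needlines together with the information exchanges flowing through them.

```python
# Illustrative machine-readable OV-2 for the "Search and Destroy" use case.
ov2 = {
    "operators": {
        "CO":  ["Issues Mission Orders"],
        "OIC": ["Receives Mission Orders from the CO",
                "Plans and Manages Mission"],
        "SO":  ["Conducts area search using the sensor subsystem"],
    },
    # (source, target) needlines with the information exchanges they carry,
    # listed explicitly per the extension proposed in the text.
    "needlines": {
        ("CO", "OIC"): ["Mission Orders"],
        ("OIC", "SO"): ["Search Patterns and Threat Characteristics"],
    },
}

for (src, dst), messages in ov2["needlines"].items():
    print(f"{src} -> {dst}: {', '.join(messages)}")
```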

Figure 1. OV-1, Search & Destroy System.


An OV-2 provides an understanding of the "spatial" structure of the operational part of the system. The operational nodes contain operators organized into multiple roles, performing specific activities and communicating information exchanges across needlines connecting the nodes. The sum total of the activities defined in the use case represented by the OV-2 adds up to an operational capability, a portion of a capability, or multiple capabilities. In this particular case, the OV-2 for the system in question illustrates the "Search and Destroy" capabilities. Note that the OV-2 is a static view that can represent the operational structure at a given instant in time. If the operational structure is reconfigured at some other instant of time, a new OV-2 is required to describe it. Such reconfiguration, although a relatively frequent occurrence in real life, will make the analysis much more complex.

Of course, in addition to the spatial extent, the operations also extend in the time dimension. The time structure (dynamic behavior) of the system can be shown as in Figure 3 by plotting each node's history as a vertical line and showing the information exchanges as messages passed between them. In DoDAF, the sequence diagram is called an OV-6c. The OV-6c analyzes the time dimension. In UML, this is known as a sequence diagram. In OMG's Business Process Modeling Notation (BPMN), this diagram is also known as a "Swim-Lane" diagram, showing the evolution of the "business process" in time and including the interactions between all the players. The information content is about the same as in the OV-2, but the OV-6c allows a different perspective. One can now see explicitly how the operations evolve with time. Alternatively, the OV-6c could also be used as the first tool to discover the contents of information exchanges, instead of the OV-2. This is often the case. In practice, these two diagrams are simultaneously iterated together more than a few times before all activities and messages are discovered, given consistent names, and traced through to double-check correctness. If a message is being sent between two operational nodes in an OV-6c, a needline must also exist between the same two nodes in the corresponding OV-2. Any operational activity allocated to a node in the OV-2 must show up in the same node in the OV-6c. Analysis of the OV-6c helps to bundle operational activities into operational roles and to combine them further into operational nodes.
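
Consistency rules of this kind are mechanical enough to automate. The sketch below, our own illustration, checks one of them: every message appearing in the OV-6c sequence must have a corresponding needline in the OV-2.

```python
def check_consistency(ov6c_messages, ov2_needlines):
    """Every OV-6c message between two nodes needs an OV-2 needline."""
    problems = []
    for src, dst, msg in ov6c_messages:
        if (src, dst) not in ov2_needlines:
            problems.append(f"'{msg}' ({src} -> {dst}) has no needline in OV-2")
    return problems

# Hypothetical fragments of the two views.
ov6c = [("CO", "OIC", "Mission Orders"), ("OIC", "TO", "Tasking")]
needlines = {("CO", "OIC"), ("OIC", "SO")}
print(check_consistency(ov6c, needlines))  # flags the OIC -> TO message
```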

The OV-6c also clearly shows the operational layers we mentioned before. The arrows to and from the CO always begin or end with the OIC. On the other hand, the OIC interacts with all the remaining operators. Note that the interactions of the operating crew with the system are included in the boxes on the timelines. It is entirely possible to show the system as another timeline in the diagram, and this is often done. However, as the details of the system design are not yet known, it would not be possible to show more than one timeline for the system at this stage. The system is still only a black box.

The OV-4 (Organization Chart) is used to record the roles of each operator/organization consisting of operational nodes (or contained in operational nodes), as shown in Figure 4. Although in this simplified example each organization consists of only a single node, each with a single role, a node could naturally contain more than one user. For illustration, roles consisting of sets of activities performed by an organization are shown next to it in our diagram. For more complex examples, it is not always possible to list all activities in such a fashion due to the lack of space. The information collected in an OV-4 defines the "human dimension" architecture in addition to the "spatial and time dimensions" that were defined in the OV-2 and OV-6c. This is an extremely valuable tool for systems engineering an enterprise where humans play a big part in the operation of the system. Human roles and their relationships can be analyzed with DoDAF OVs in direct support of Human Systems Integration efforts.

Figure 2. OV-2, Operational Architecture Diagram, presents the operational nodes and needlines between them.

Our "Search and Destroy" use case is analyzed into several sub-use-cases: Plan and Manage Mission, Search and Detect Target Objects, Track Target Objects, Engage Target Objects, and Assess Engagement Effectiveness. This analysis breaks the initial complexity of the problem into manageable chunks. The resulting hierarchy of activities can be shown as in Figure 5. In DoDAF, this diagram is called the OV-5. Each box in this diagram represents an activity performed by an operator. This type of representation of the use case is quite familiar to UML users.

Figure 3. OV-6c, Operational Activity Sequence Diagram, shows lifelines for each operational node.

Figure 4. OV-4, Organizational Relationships Diagram, defines the human roles.

The name of the corresponding operator has been added underneath each activity to show the correspondence between activities and human roles. Strictly speaking, this annotation is not part of the DoDAF format defined in version 1.0 or 1.5; however, we can claim it as part of the flexibility offered by DoDAF 2.0. This extension of the DoDAF format is helpful for seeing which user performs which activity. In IDEF0, it corresponds to a mechanism.

The OV-5 exists in two distinct forms. In DoDAF 2.0, they actually received two different names, with OV-5a used for the form shown in Figure 5. The alternative form, shown in Figure 6, is the Activity Model, OV-5b. This version of the OV-5 shows how activities, performed by operators, communicate via information exchanges. This OV-5 complements the OV-2. The OV-2 shows all the activities and exchanges that are possible between the operational nodes, while OV-5b, the Activity Model, shows the logic of the operational process. While the OV-2 represents the spatial domain, and the OV-6c the time domain, the OV-5 represents the abstract domain of activities. All diagrams have to be consistent, using the same names for the same entities (nodes, activities, and messages).

The sequence of activities and messages from the OV-6c sequence diagram has to be traceable in both the OV-2 and the OV-5. From the OV-6c, the "search and destroy" business process being discussed begins with a set of "Mission Orders" from the CO entering the "Plan and Manage Mission" activity performed by the OIC. The "Plan and Manage Mission" activity generates "Search Patterns and Threat Characteristics" and passes these on to the "Search and Detect Target Objects" activity that is performed by the SO. The "Search and Detect Target Objects" activity generates "Target Object Reports" that are sent to the "Track Target Objects" activity performed by the TO. This activity in turn generates "Target Tracks" reports that are passed to the "Engage Target Objects" activity performed by the WO.

Figure 5. OV-5a, Activity Hierarchy, presents the decomposition of operational activities in hierarchical form.

Figure 6. OV-5b, Activity Flow Diagram, presents the interfaces between activities.

The "Engage Target Objects" activity generates a "Weapons Release Request" that is sent back to the "Plan and Manage Mission" activity. The "Plan and Manage Mission" activity generates a "Weapons Release Request" that is sent outside the system. When it comes back as a "Weapons Release Authorization," it is relayed to the "Engage Target Objects" activity, which in turn releases the Weapon and generates the "Weapon Release Notification" message to the "Assess Engagement Effectiveness" activity.

The "Assess Engagement Effectiveness" activity has been allocated to the SO in anticipation of the fact that, in the system concept to be discussed later, the corresponding system function is allocated to the Sensor subsystem. Note that, for another system concept, this function could be allocated to another subsystem and this activity could be allocated to another operational node. This is why iterations are needed in system analysis and design. Should a better solution be found in the course of analysis, the system design is modified. It is much less costly to perform such major adjustments on the system model than on the detailed system design in a later phase. This is how the value of systems engineering shows up in general: the return on investment is the cost of mistakes avoided.

7. DERIVING OPERATOR REQUIREMENTS FROM OPERATIONAL VIEWS

We will now show how the model-centric operational architecture can be translated into a document-centric list of operational requirements. The bridge between the model of the system, an architectural framework, and the derivation of system requirements is the central thesis of this paper. In this section, we show the process of deriving the requirements from the operational views, using the OV-2 in particular. It is imperative, however, that the other OVs discussed in the foregoing (OV-4, OV-5 in two flavors, and OV-6c) be consistent with the OV-2 that we are using to write down requirements. In other words, we don't start writing the requirements until the OV diagrams are all consistent.

Note that, from the point of view of the top-level stakeholders, the requirements being discussed here are the derived operational requirements. If operators were considered part of the system, these would be derived system requirements, their "activities" would be "system functions," and we would use the SV products to analyze them. The system boundary is, to some extent at least, arbitrarily defined by the systems engineers.

Once the Operational Views are all consistent with one another, we can say that they have converged or that the process of their development has converged. One can have a reasonable expectation that a converged set of views is complete. At this point, we simply redraw the OV-2, converting activities into requirements by adding "shall" statements as appropriate. For example, as shown in Figure 7, the statement "Sensor Operator conducts area search using the sensor subsystem" is translated into "Sensor Operator shall conduct area search using the sensor subsystem."
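
This conversion is almost mechanical for simple declarative activity statements of the form used here. The sketch below is our own naive illustration; it merely inserts "shall" after the actor and strips the third-person "s" from the verb, so it would need refinement for irregular verbs.

```python
def to_requirement(actor, activity):
    """Turn 'Actor <verbs> ...' into 'Actor shall <verb> ...' (naive heuristic)."""
    verb, _, rest = activity.partition(" ")
    base = verb[:-1] if verb.endswith("s") else verb  # 'conducts' -> 'conduct'
    return f"{actor} shall {base} {rest}".strip()

print(to_requirement("Sensor Operator",
                     "conducts area search using the sensor subsystem"))
# -> Sensor Operator shall conduct area search using the sensor subsystem
```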

Figure 7. “Operator” requirements derived from the OV-2.


A complete OV-2 diagram lists all the nodes/operators, activities, and messages involved in the use case. Hence, through repeated application of the process described above, we obtain a list of "shall" statements that completely characterize the use case. In this way, the OV-2, which is a representation of the use case under consideration, is "serialized" into textual format. Figure 7 shows how the requirements naturally follow from the OV-2 for the Search and Destroy use case. Clearly, the old dictum that "a picture is worth a thousand words" is confirmed here.

Figure 8 shows a fragment of the list of "operator" requirements derived from our OV-2 in tabular form. This set of "operator" requirements can be used to plan a series of verification tests. These tests will be performed after the system is finally built. When all the "Operator shall" statements in this list are successfully tested and the performance is satisfactory, the system will be verified and on its path towards final validation. After all, the system was built to support these activities. Anyone who, at some point in her or his career, has had to write a Test Plan for a program test scheduled for the proverbial "yesterday" will agree that having such requirements would "make her or his day." Availability of an OV-6c, which exactly spells out the sequence of activities, should be a "life saver" for the development of test procedures. We will show in the following how the use of DoDAF views to generate requirements can be extended to all other types of requirements that will assist the test plan writers even further.

It is natural, in the context of DoDAF, to call these "operator" requirements "operational," since they are derived from "Operational Views." DoDAF views were originally intended for and are frequently used at a very high level where the "operators" are entire military organizations, with operational views describing their operations in military missions. Hence the term "operational" views used in reference to these DoDAF products. In the simple example presented in this paper, we are considering a system at a much lower level, and the term "operator" is more appropriate. Thus, we stated that we derived and allocated a set of "operator" requirements. We chose to use this terminology here to emphasize the difference. In other situations, the word "operational" will be more appropriate. Note that, in this way, the "operator" requirements were also automatically allocated to the operational nodes. The list of requirements can be entered into a requirements management database tool. Linkage to the list of architectural elements from the OV-2 may then be recorded for each requirement with an appropriate attribute of each requirement record in the database. This linkage is also called allocation.

It is a good idea for stakeholders to review and approve this set of "operator requirements" before proceeding to the next phase, as part of the Requirements Validation process. In conjunction with the associated Operational Views that we have used to generate them, the "operator" requirements now form a rather detailed description of the intended use of the system. All operators and their activities are defined, and usage scenarios can be traced through to make sure that what is being proposed satisfies the stakeholders' vision.

8. BUILDING A FUNCTIONAL MODEL OF THE SYSTEM

The objective of the system functional analysis is to create a functional architecture. The functional architecture is a hierarchy of functions that the system must be capable of performing. These functions are derived from the operational capabilities that the stakeholders desire to have. The functional architecture provides the foundation for defining the system architecture in the next step, through the allocation of functions and subfunctions to the proposed system entities.

So far, our use case has been detailed with the set of OVs. The point of view now changes from user-centric to system-centric. However, the system described as a functional architecture is a layer of abstraction, still independent of the actual physical implementation. If done correctly, this functional architecture layer of abstraction will remain the same in spite of changing technologies. The concept of separation of concerns through layers of abstraction in systems engineering design is a method for dealing with complexity [Dijkstra, 1974]. The overall idea is to create a layered schema analogous to the Open Systems Interconnection (OSI) stack [Zimmerman, 1980] used for networking computers.

For the system in our example, the proposed hierarchical decomposition of system functions, the SV-4 in Figure 9, closely mirrors the hierarchical decomposition of activities. Such close correspondence between functions and activities is not unusual. Systems are, after all, built to support the activities that the operators are supposed to perform with the system. In fact, the difference is so subtle that at first sight it may be difficult to grasp. At least one system function is required for each activity, although sometimes the same system function can support several different activities. Human beings excel in finding new uses for existing systems and their functions.

Figure 8. "Operator" requirements.

Analysis of activities performed with DoDAF operational views provides insight into the very first layer of the systems engineering "onion." The inner layers of the "onion," representing the system itself, are analyzed with system views. To remain in the clear, one has to keep in mind that the activities are operator-centric ("Operator shall…") and system functions are system-centric ("System shall…"). The word "system" here means the next layer of the "engineered" system composed of hardware, software, and "people-ware," like the system envisioned in Figure 1. Typically, as technology evolves, system functions performed by humans are gradually replaced by automation or at least mechanized.

The functional architecture can have many levels of hierarchy. The first level of system decomposition, shown in Figure 9, corresponds to major functions performed by the system. One needs to be a little careful here. The "Mission Planning Function" the system performs is substantially different from the "Plan Mission" activity performed by the OIC in the OV-5 in Figure 6. Perhaps more aptly, it could also be called a "Mission Planning Support Function." While the OIC actually performs the mission planning activities using the system, the system "only" supports the OIC with the mission planning functionality. In the next level of decomposition, this function may include subfunctions providing access to libraries of geographical maps, capabilities for charting the course of the missiles on the screen for situational awareness, flight models of weapons and threats for generating engagement predictions, etc.

Depending on the level of automation permitted by the technological state of the art, the balance of the mission planning capability may be divided between the human activity and the system function to varying degrees. For example, the portion of what is termed "intelligent reasoning" that is being implemented in software is constantly increasing. This division is intuitively obvious to any developer, so that, at the point in time when one is developing any specific design, she or he is able to assess how much of the functionality should be allocated to the computer and how much to the human.

With computer technology moving at an ultrafast pace, it is impossible to learn this skill in college and use it for the rest of one's life. Systems engineers have a delicate task because they may be pushing the state of the art with their concepts and need a reality check from the developers here. This is one of many points where design on the cutting edge will force iteration of the systems engineering process.

The process of developing the functional architecture is rather straightforward. In the next level, each function is decomposed into its subfunctions, and so on. This paper will remain at the single level, as additional levels would add little value to the presentation while the complexity would increase multifold.

Although the functional architecture is an abstraction of the physical implementation, it does influence the physical design. Major system functions are typically performed by major subsystems at the next level of the system's organization. Those top-level subsystems are called system segments. It is also possible to bundle several system functions into one segment/subsystem. Principles of "loosely coupled design" dictate the segmentation: all closely coupled functions should be lumped together, while the segments themselves should have relatively few interactions. The design is usually iterated at this stage with inputs from experts from many disciplines assembled into a Systems Engineering Integrated Product Team (SEIPT).

It is important to keep track of the relationships between the system functions and operational activities because they are a link in the requirements traceability chain. The complete requirements traceability chain flows from stakeholders' requirements to operational requirements to functional requirements to system requirements to physical requirements. The traceability from functions to activities connects functional requirements and operator requirements. A requirements management tool typically records the trace between functional requirements, operator requirements, and stakeholder requirements directly as attributes of the database records. In DoDAF, a convenient place to record the allocation of system functions to operational activities is the matrix shown in Figure 10, called the SV-5.
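
Since an SV-5 is essentially a binary matrix between functions and activities, a set of pairs is enough to represent it. The sketch below, our own illustration with hypothetical entries, records the allocation and answers both directions of the traceability question.

```python
# SV-5 sketch: allocation of system functions to operational activities.
sv5 = {
    ("Mission Planning Function", "Plan and Manage Mission"),
    ("Search Function",           "Search and Detect Target Objects"),
    ("Tracking Function",         "Track Target Objects"),
}

def activities_supported_by(function, matrix):
    """Operational activities that a given system function supports."""
    return sorted(act for fn, act in matrix if fn == function)

def functions_supporting(activity, matrix):
    """System functions allocated to a given operational activity."""
    return sorted(fn for fn, act in matrix if act == activity)

print(activities_supported_by("Search Function", sv5))
print(functions_supporting("Track Target Objects", sv5))
```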

Like the OV-5, the SV-4 has its "flow" counterpart version called the Data Flow Diagram, shown in Figure 11. The boxes now represent system functions, and the lines are the system data exchanges. The essential difference between the SV-4 and the OV-5 is that the operator layer was "peeled off," and the SV-4 diagram shows the system's abstract functional architecture. Although DoDAF 2.0 recognized the two alternative forms of the OV-5 by assigning them different names, OV-5a for the activity tree and OV-5b for the resource flow, no such recognition was given to the two forms of the SV-4, but the text does identify them as the "Taxonomic Functional Hierarchy" and the "Data Flow Diagram." The concept of "Data," however, has been subsumed by the concept of "Resource," which now, in its full generality, can represent flows of materiel, people, and information.

Figure 9. SV-4, system functional hierarchy.

The system data exchanges between system functions in the SV-4 Data Flow Diagram are different from the information exchanges between operational nodes in the OV-5, the Information Exchange Diagram. The relationship between the two sets can be obtained by means of a transformation based on the relationships between the system functions and activities recorded in an SV-5.

The SV-4 views will evolve along with the Operational Views during the systems engineering requirements analysis and design process. Although the Operational Views developed in the first phase of the analysis and design might have been considered complete and converged, the functional analysis of the system may require modifying some of them as new ideas are introduced. Hence, we need to extend the notion of the system model's convergence to include the SV-4 as well. Some consideration in developing the system functional architecture must also be given to feasibility. If a particular system function is not feasible with today's technology, the system will not be realizable, and the entire problem must be reexamined from the beginning.

Figure 10. SV-5, allocation of system functions to activities.

Figure 11. SV-4, system data exchanges flow diagram.

While the concept of system data exchanged between system functions presents no problems for C4I or enterprise IT automation systems, where it was originally developed, extending this idea to interactions between functions provided by pieces of "real" hardware requires a leap of imagination. Hardware can implement functions such as "Provide Structural Support." What possibly can be the "system data" that flows between two "provide structural support" system functions? Mechanical stresses, of course. Similarly, heat flow is the "data" flowing between the cooling and heating system functions, etc. Mechanical engineers have a lot to learn in this new paradigm. In their world, a use case is typically something called the maximum load condition. DoDAF 2.0 generalized the concept of "Information" used in the OV and the concept of "Data" used in the SV to "Resource." In addition, a "Resource" can now represent objects other than information, including materiel and people. Although transmission of stresses or heat is not explicitly mentioned in the DoDAF 2.0 documentation [DoDAF WG, 2009], this generalization permits their inclusion.

9. DERIVING SYSTEM FUNCTIONAL REQUIREMENTS FROM THE FUNCTIONAL VIEWS

We are now ready to translate the model-centric functional architecture into a list of functional requirements. Since we have already gone through this process once for the operational architecture, it should not be too difficult to go through it again, this time for the functional architecture.

Using the SV-4 (System Data Exchange Flow Diagram), functional requirements can now be written by inspection from Figure 11. Examine each system function in the SV-4 diagram and observe the inputs it accepts and the outputs it produces. The corresponding functional requirements are formulated as follows: "Function A shall transform Input B into Output C," "Function A shall accept Input B," and "Function A shall produce Output C." A subset of the functional requirements derived in this way is shown tabulated in Figure 12. The functional requirements are thus automatically allocated to their respective functions. Each function corresponds to several requirements, depending on the number of inputs and outputs. Functional requirements and their allocations can be tracked with such a table. This table is not part of the current versions of DoDAF, although some of the available architecting tools can implement this traceability with custom-designed scripts. It is typically implemented with a requirements management tool. Alternatively, it can be recorded in a spreadsheet. All such forms of documentation are now permitted under DoDAF 2.0.

We have thus shown how the DoDAF functional architecture system views (SV-4) can be used to derive a complete set of functional requirements. Functional performance requirements can also be written now for every system function. Each performance requirement can be formulated as “Function A shall produce Output C at level D with accuracy E,” or similar. If the functional analysis has converged, the functional requirements are consistent with the functional architecture. The term “convergence” is used here to emphasize the iterative nature of the process of generating a set of SV-4 diagrams corresponding to the OV-2 and OV-5 via the SV-5.
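As a concrete illustration of this inspection pattern, the following minimal sketch (in Python, with illustrative function and data names that are not taken from the paper’s figures) generates the “shall” statements from a small SV-4-style input/output table; a requirements management tool or spreadsheet would normally hold such a table.

# Minimal sketch: deriving functional requirements from an SV-4-style
# table of system functions with their inputs and outputs.

sv4_functions = {
    "Track Target": {"inputs": ["Sensor Detections"],
                     "outputs": ["Target Track"]},
    "Assign Weapon": {"inputs": ["Target Track"],
                      "outputs": ["Weapon Assignment"]},
}

def derive_requirements(functions):
    """One requirement per input, per output, and per input/output pair,
    automatically allocated to the owning function."""
    reqs = []
    for name, io in functions.items():
        for inp in io["inputs"]:
            reqs.append((name, f"{name} shall accept {inp}."))
        for out in io["outputs"]:
            reqs.append((name, f"{name} shall produce {out}."))
            # A performance requirement follows the same pattern, e.g.
            # f"{name} shall produce {out} at level D with accuracy E."
        for inp in io["inputs"]:
            for out in io["outputs"]:
                reqs.append((name,
                             f"{name} shall transform {inp} into {out}."))
    return reqs

for owner, text in derive_requirements(sv4_functions):
    print(f"{owner}: {text}")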

10. SYNTHESIZING THE SYSTEM

The next step in the systems engineering process is design synthesis. The design synthesis process transforms the functional architecture into a system architecture and later into a physical architecture. We are differentiating between a system architecture, where the entities are logical modules, and the physical architecture, with actual physical components. These could map one-to-one, but a physical architecture layer decoupled from the system architecture provides another layer of abstraction to deal with technology evolution and the associated changes.

Figure 12. Functional requirements.

The physical implementation of the system must be consistent with the technical maturity and acceptable technology development risks for the constituent subsystems and components at any given technology state of the art. The same functions can be fulfilled by different technological solutions as the technology changes. To provide an extreme example, messages that were transported by pigeons many years ago can now be transported by radio transmissions. The operational views and functional architecture of the communication system employing either would be the same. Of course, performance defined as time of message delivery is dramatically better with the radio.

The important lesson to draw from this example is that the functional architecture can remain unchanged while the physical architecture changes. This is significant in addressing complexity. Going further, as new technologies become available for specific parts of the system, we want to be able to replace the old technologies without worrying about the other parts of the system. To facilitate this, we want to isolate parts of the system by creating modules that are as loosely coupled as possible. Creation of the functional architecture was a major step in this direction. In that part of the process, we identified the “smallest” components of the system as system functions. In the real world, we don’t build systems from modules that can only perform a single function. Each system module usually combines a few of the system functions together with their inputs and outputs. As already mentioned, we want to lump together functions with many mutual interactions within one module and minimize the number of interactions external to the module. This will make it easier to replace modules when technology changes.

One more step is necessary to complete our quest to design the system. We need to develop a diagram of the system consisting of modules and their interfaces. We shall call this diagram the system architecture. Each module in the system architecture, as you might already be guessing, consists of or implements a number of system functions. The interfaces are represented by lines that implement a number of system data flows between functions contained in different modules. The physical implementation of the system, which we will call the physical architecture, actually replaces the modules with hardware, software, and “people-ware” entities that perform the functions contained in them.

Figure 13 presents the system architecture for our system, with one specific subsystem for each major system function. In DoDAF, this block diagram is called an SV-1. Each subsystem is called a system entity in the DoDAF SV-1, or a system for short. Each system entity performs a number of (or a part of one) system functions. System interfaces between system entities accommodate the system data exchanges between system functions from the SV-4 Data Flow Diagram. System interfaces with the operators are also identified.

It is to be noted that, in real-world applications, multiple SV-1s will be needed for multiple levels. Each level is a whole consisting of multiple parts. The whole has emergent functional behaviors that are not exhibited by the parts. The parts do not inherit the properties of the whole. The first SV-1 is built at the system level, showing the system decomposition into major segments; next, a separate SV-1 for each segment shows the segment’s decomposition into systems; then still another set of SV-1s shows subsystems, and so on. Normally, the higher layer is defined to the maximum extent possible before moving on to the design of the next layer, although it is not inconceivable that changes will have to be made to the higher level as the details of the next layer are discovered to conflict with one another. The SV-4 may also require refinement. The iteration continues until the system entities can be built from existing components that already provide the functions and interfaces required. Such components may be available as Commercial Off the Shelf (COTS) or Government Off the Shelf (GOTS) items.

Figure 13. SV-1, System Interfaces Diagram, presents the modular architecture of the system, showing system entities and the interfaces between them.

DoD acquisition uses the term System of Systems (SoS) for systems integrated from elements developed under separate programs of record with independent funding and management. This is a somewhat artificial term that is useful in DoD acquisition practice. In systems engineering, all systems are systems of systems and all systems of systems are systems. In our simple example, the “Sensor,” “Tracker,” and “Weapon” could be independent systems under separate management. However, this hardly matters for the purposes of our discussion, which does not concern itself with the programmatic acquisition issues that are out of scope for this paper.

Figure 14 shows the allocation of system functions to system entities with a matrix. Again, this matrix is not part of the DoDAF 1.0 or 1.5 sanctioned set of views, but in DoDAF 2.0 it has been added as SV-5b [DoDAF WG, 2009]. For the time being, it can be built in a spreadsheet. This matrix is used to keep track of the system function allocations to system entities. It allows one to establish the traceability between the functional requirements that were derived in the foregoing and the system requirements sets that will be derived next.
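The sketch below shows one way such an SV-5b-style allocation could be kept and queried; it is a minimal sketch with illustrative entity and function names loosely following the Sensor/Tracker/Weapon example, and a spreadsheet would serve equally well.

# Minimal sketch: an SV-5b-style allocation of system functions to
# system entities, held as a plain mapping.

allocation = {
    "Detect Target": "Sensor",
    "Track Target": "Tracker",
    "Engage Target": "Weapon",
}

def functions_of(entity, allocation):
    """Reverse lookup: all system functions allocated to one entity."""
    return [f for f, e in allocation.items() if e == entity]

# A functional requirement allocated to "Track Target" traces to the
# entity that implements that function, and vice versa.
print(allocation["Track Target"])           # -> Tracker
print(functions_of("Tracker", allocation))  # -> ['Track Target']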

11. DERIVING SYSTEM REQUIREMENTS FROM THE SYSTEM VIEWS

In this section, a model-centric view of the system is once again converted into a set of system requirements. Using the SV-1, system requirements can now be derived for each system entity and its system interfaces. For each system entity, the corresponding system entity functional requirements are generated by listing the system functions that were allocated to this entity, i.e., “The System Entity A shall perform System Function B,” etc. Similarly, system entity interface requirements can be written for each of the corresponding interfaces between all the system entities as identified in the SV-1, e.g., “The System Entity C shall interface with System Entity D via System Interface E” or “System Entity F shall provide Operator Interface H.” If the human interfaces shown in the SV-1 have already been identified in the OV-2, OV-4, and OV-5, the SV-1 has to be compatible with these prior representations. The major new development compared with the OVs is that now we are finally showing the internal details of the material system we are designing. While the Operational Views treated the system as a “black box,” the SV-1 shows the system as a “white box” with its interior workings.
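A minimal sketch of this derivation, with illustrative entity, function, and interface names (none taken from the paper’s SV-1):

# Minimal sketch: generating system entity requirements from SV-1-style
# data (allocated functions plus identified interfaces).

allocated_functions = {
    "Tracker Subsystem": ["Track Target", "Report Track Quality"],
}

# Interfaces identified in the SV-1: (entity, peer entity, interface).
interfaces = [
    ("Tracker Subsystem", "Sensor Subsystem", "Detection Interface"),
    ("Tracker Subsystem", "Weapon Subsystem", "Track Interface"),
]

requirements = []
for entity, funcs in allocated_functions.items():
    for func in funcs:
        requirements.append(f"The {entity} shall perform {func}.")
for entity, peer, iface in interfaces:
    requirements.append(
        f"The {entity} shall interface with {peer} via {iface}.")

print("\n".join(requirements))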

In a manner analogous to the OV-6c, the time sequence of system data exchanges between system functions can be shown with a sequence diagram called the SV-10c, System Event Trace. In this diagram, each system entity gets its own timeline. The SV-10c can be used to double-check the SV-1 by tracing the flow of system data exchanges between the system entities in both diagrams and making sure they are consistent. The system data exchanges in the SV-10c must flow across the interfaces seen in the SV-1. The two diagrams must also use the same terminology.

Another useful tool is the SV-10b, System State Diagram, which shows the states the system can reside in and all possible state transitions. Of course, for a complex system the state diagram can get very complicated. A simple logic gate is in a different state for each combination of zeros and ones. A computer system contains billions of logic gates. Thus, theoretically, today’s systems have zillions of states and zillions of transitions. Portraying all these with a diagram is not possible. A developer will typically select the most important macro states that she or he wants to focus on and show these select few states in the SV-10b.

The SV-10a, Systems Rules Model, is used to represent constraints on the way the system operates. Some functions may be executed conditionally, only if certain conditions are met. This information is essential for understanding system behavior. Clearly, each rule shown in the SV-10a diagram must be translated into a requirement that will have the form of a condition: “If such and such, the system shall do such and such.” More than one SV-10a may have to be employed to completely characterize all the system rules at various levels.
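The rule-to-requirement translation is mechanical enough to sketch; in the minimal example below, the rule conditions and actions are illustrative inventions, not drawn from any SV-10a in the paper.

# Minimal sketch: translating SV-10a-style rules into conditional
# "shall" requirements.

rules = [
    ("the target track quality falls below threshold",
     "re-acquire the target"),
    ("the operator issues a hold-fire command",
     "inhibit weapon release"),
]

for condition, action in rules:
    print(f"If {condition}, the system shall {action}.")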

Figure 14. Allocation of system functions to system elements.


The SV-10c should be double-checked to make sure the time sequence does not violate any of the rules shown in the corresponding SV-10a.

A short sample of system requirements derived through this process is shown in Figure 15. Again, the result of the process of requirements derivation from the SV-1 is a serialized description of the SV-1 in the form of a list. This list can be used to plan the verification tests for the system requirements. In the course of system integration, system functional requirements are verified by demonstrating, with specifically designed tests, that at every level of assembly each system entity provides all the required functionalities. Performance associated with each functional requirement can be measured in the same test event or in another one designed for this purpose. System entity interface requirements are verified by exercising the exchange of system data between the entities. The SV-4 and SV-1 assist us in this process by identifying all the entities, all the functions, all the interfaces, and all the data exchanges. For completeness, DoDAF’s SV-6 can be used to tabulate a list of attributes for every interface to guide the test engineer in designing the appropriate interface requirements verification tests.
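As a rough illustration, an SV-6-style attribute record and the interface test it suggests might look as follows; all field names and values here are assumptions made for the sketch.

# Minimal sketch: an SV-6-style interface attribute record used to plan
# an interface verification test.

interface_attributes = [
    {"interface": "Track Interface",
     "sender": "Tracker Subsystem",
     "receiver": "Weapon Subsystem",
     "data": "Target Track",
     "periodicity": "10 Hz"},
]

for row in interface_attributes:
    print(f"Test: exercise {row['interface']} by sending {row['data']} "
          f"from {row['sender']} to {row['receiver']} "
          f"at {row['periodicity']}.")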

12. IMPLEMENTING THE SYSTEM

The SV-2, Systems Communications Description, renamed to System Resource Flow Description in DoDAF 2.0, is shown in Figure 16 as a description of the physical architecture. The physical architecture is a physical implementation of the SV-1. The SV-2 shows local and wide area networks, switches, hubs, and routers, as well as sensors and weapons, displays, and database servers. The physical architecture consists of physical components that represent the technological state of the art available to the system designer. In the implementation, a single system entity in the SV-1 can consist of more than one physical component, or the other way around.

Figure 15. Derived system requirements.

Figure 16. SV-2, Systems Communications Description, presents the physical architecture elements and their communications links.


Although the SV-2 is not called the Physical View in DoDAF, it is the closest the framework comes to providing such a product. The term “Communications” in the name “Systems Communications Description” betrays DoDAF’s C4ISR origins, which are being stretched here to our more general application. Beyond the SV-2, the physical design details are usually described with structural and mechanical drawings, electrical and electronic schematics, and UML diagrams for software.

13. DERIVING PHYSICAL REQUIREMENTS FROM THE PHYSICAL VIEWS

This time around, the model-centric rendition of the system’s physical implementation in the SV-2 is converted to a list of physical requirements. Thus, the SV-2 is used to derive the physical requirements shown in Figure 17.

This can be accomplished through a process completely analogous to the one described above for the SV-1. Every physical component corresponds to a combination of several system entities or a part of one. Likewise, the physical interfaces correspond to collections of system interfaces defined in the SV-1, or to interfaces internal to a system entity if it consists of several physical components. For every physical component we can derive “requirements” for each of its functions, performance characteristics, and interfaces, using the “shalls” as before. The result is a set of documents that we call specifications for system components. These specifications represent a set of characteristics that are input to the process of generating component acceptance testing plans. Again, since the component specifications were traceable to system requirements, which were traceable to functional requirements, and the functional requirements were traceable to “operator” requirements, which in turn were traceable to stakeholders’ requirements, we achieve full traceability from the physical system and its component specifications to the stakeholders’ requirements.
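A minimal sketch of this traceability chain, with every requirement name invented for illustration; real traceability links are many-to-many and would live in a requirements management tool.

# Minimal sketch: a requirements traceability chain held as parent-to-
# child links. Real traceability is many-to-many; a requirements
# management tool handles that generality.

links = {
    "Stakeholder: defend the asset":
        "Operational: the operators shall detect and engage threats",
    "Operational: the operators shall detect and engage threats":
        "Functional: Track Target shall produce Target Track",
    "Functional: Track Target shall produce Target Track":
        "System: the Tracker Subsystem shall perform Track Target",
    "System: the Tracker Subsystem shall perform Track Target":
        "Physical: radar processor component specification",
}

def trace_down(requirement, links):
    """Follow the chain from a stakeholder requirement down to the
    physical component specification that implements it."""
    chain = [requirement]
    while requirement in links:
        requirement = links[requirement]
        chain.append(requirement)
    return chain

for step in trace_down("Stakeholder: defend the asset", links):
    print(step)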

The way to document the traceability of the physical components to system entities is through another table, as shown in Figure 18. This table is not part of DoDAF, but again can be built with a spreadsheet. This table is another link in the requirements traceability chain. There is presently no tool that we know of in which all the traceability tables and architectural artifacts proposed in this paper can be built and stored. A combination of several tools has to be used to reach this objective. However, this is not so difficult and can be done quite readily.

Figure 17. Physical requirements.

Figure 18. Allocation of system elements to physical components.

The analysis is completed when all the system functions have been allocated to specific physical components, all system data exchanges are allocated to specific physical interfaces, and every stakeholder requirement is traceable to a physical component with identified functional and performance requirements. The architectural models provide the context for the requirements and can be helpful in assessing system behavior resulting from the interactions between system functions. They also support integration tests and requirements verification plans by providing details of subsystem arrangements into systems to the integrators and verification planners.
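These completion criteria lend themselves to a simple mechanical check; the sketch below tests only the function-to-component allocation, with illustrative names, and the data exchange and stakeholder traceability checks would follow the same pattern.

# Minimal sketch: checking one of the completion criteria for the
# analysis (every system function allocated to a physical component).

functions = {"Detect Target", "Track Target", "Engage Target"}
allocated = {"Detect Target": "Radar unit",
             "Track Target": "Track processor"}

unallocated = functions - set(allocated)
if unallocated:
    print("Analysis incomplete; unallocated functions:",
          sorted(unallocated))
else:
    print("All system functions are allocated to physical components.")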

14. SUMMARY AND CONCLUSIONS

The paper presented an example implementation of MBSE employing DoDAF artifacts for guidance. This illustrative example shows how to derive the operational, functional, system, and physical requirements from a system model represented with DoDAF views. The discussion steps through the development of a system model, building a sequence of architectural views (operational, functional, system, and physical); the corresponding requirements are derived via a systematic iterative process that tightly couples the model development with the requirements analysis. The requirements generation as presented here could conceivably even be automated with a computer-aided model-based systems engineering tool, freeing the developer from the actual typing.

Throughout the presentation, we emphasized requirements traceability. Traceability is traditionally used to evaluate the impact of changes to any top-level originating requirements. Traceability helps avoid requirements creep. Traceability is also helpful in evaluating the effects of shortcomings found in requirements verification testing. In combination with requirements allocation, traceability enables subsystem test planning. Traceability is typically managed with one of the widely used requirements management tools. Here, we have discussed the contributions that the system model built with DoDAF views can make to this effort. The resulting model, architectures, views, and requirements are traceable to one another, and their mutual consistency is a significant argument in favor of the quality of the requirements derived with the proposed methodology.

The DoDAF-based MBSE approach presented in this paper can support and guide the SE process from requirements generation and design through integration, verification, and validation. The architectural views and requirements listings generated are available as input to the planning of integration, test, and evaluation (IT&E), providing insight into the intended structure of the system as well as its operation. Use case scenarios and operational sequence diagrams are very helpful for developing verification and validation plans. The test cases can be traced through the OV and SV diagrams to make sure they make sense. The operator activities shown in the Operational Views need to be exercised by the testers to demonstrate that the system provides the corresponding functionalities within the desired performance range. Data exchange messages communicated between system functions provided in the System Views should be exercised to demonstrate proper functioning of the interfaces. The path of data from source to destination can be traced on an SV-1 and SV-10c.

Model Based Systems Engineering with graphical DoDAF artifacts simplifies communications between the stakeholders, systems engineers, and subject matter experts. The deep knowledge of system behavior gained early in the development through the modeling performed with the views can help prevent later problems in the detailed design and tests.

The complexity of modern systems that integrate humans, software, and hardware to address frequently conflicting needs and constraints makes requirements engineering increasingly difficult to manage. Dealing with this complexity requires a complete revision of the approaches and methods of systems engineering to achieve usable, reliable, and cost-effective solutions to problems that are becoming more and more difficult. In its time, Model Driven Architecture was used to overcome the novel challenges presented by software development. Today, for all practical purposes, there are no systems that do not incorporate software, and there are no systems that consist of software alone. The unprecedented integration of software and people encountered in enterprise application development created a need for more encompassing, “holistic” thinking, away from the silos. This need drove the development of architectural frameworks that paved the way to modern service-oriented applications supporting global enterprises operating over the Internet. The present-day challenge is to engineer the integration of hardware, software, and human operators that depend on one another in many complicated ways. The old ways won’t do anymore.

REFERENCES

I. Bailey, F. Dandashi, H.-W. Ang, and D. Hardy, Using systems engineering standards in an architecture framework, Capstone Report, Naval Postgraduate School, Monterey, CA, 2005.

M.P. Bienvenu and K.A. Goodwin, The DoD AF views as requirements vehicles in an MDA systems development process, 2004 Command and Control Research and Technology Symposium.

M.P. Bienvenu, I. Shin, and A.H. Levis, C4ISR architectures: III. An object-oriented approach for architecture design, Syst Eng 3(4) (2000), 288–312.

D. Buede, Elevator system case study, operational system, Stevens Institute of Technology, Hoboken, NJ, 1995.

S.R. Childers and J. Long, A concurrent methodology for the system engineering design process, Vitech, Vienna, VA, 1994.

D.L. Clark, D.M. Howell, and C.E. Wilson, Improving naval shipbuilding project efficiency through rework reduction, Thesis, Naval Postgraduate School, Monterey, CA, September 2007.

H.E. Crisp, Systems engineering vision 2020, Version 2.0.3, INCOSE, Seattle, WA, September 2007.

C4ISR WG, C4ISR Architecture Framework, Version 2.0, C4ISR Architecture Working Group, Department of Defense, Washington, DC, December 18, 1997.

T. DeMarco, Structured analysis and system specification, Prentice Hall, Upper Saddle River, NJ, 1979.

E.W. Dijkstra, Selected writings on computing: A personal perspective, Springer Verlag, New York, 1982, pp. 60–66.

DoDAF WG, DoD architecture framework, Version 1.0, DoD Architecture Framework Working Group, Department of Defense, Washington, DC, August 15, 2003.

DoDAF WG, DoD architecture framework, Version 1.5, DoD Architecture Framework Working Group, Department of Defense, Washington, DC, April 23, 2007.

DoDAF WG, DoDAF architecture framework, Version 2.0, DoD Architecture Framework Working Group, Department of Defense, Washington, DC, May 28, 2009.

J.A. Estefan, Survey of Model-Based Systems Engineering (MBSE) methodologies, Rev. B, INCOSE-TD-2007-003-01, MBSE Initiative, International Council on Systems Engineering, Seattle, WA, May 23, 2008.

H.P. Hoffman, Harmony-SE, SysML deskbook, Model Based Systems Engineering with Rhapsody, Rev 2.0, IBM, Armonk, NY, October 12, 2006.

I. Hooks, Why Johnny can’t write requirements, Proceedings of the AIAA Conference, 1990.

IDD DID, Interface Design Description (IDD), DI-IPSC-81436, http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498, 2010.

INCOSE, Systems engineering handbook, v3.2, INCOSE, Seattle, WA, 2010.

IRS DID, Interface Requirements Specification (IRS), DI-IPSC-81434, http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498, 2010.

JCIDS, Chairman of the Joint Chiefs of Staff Instruction, CJCSI 3170.01G, Joint Capabilities Integration and Development System, Department of Defense, Washington, DC, March 1, 2009.

A.H. Levis and L.W. Wagenhals, C4ISR architectures: I. Developing a process for C4ISR, Syst Eng 3(4) (2000), 225–247.

MoD, Ministry of Defence Architecture Framework, London, UK, http://www.modaf.org.uk/, 2010.

NATO, NAF Version 3.0, Brussels, Belgium, http://www.nhqc3s.nato.int/, 2010.

NIST, Draft Federal Information Processing Standards Publication 183, Announcing the Standard for Integration Definition for Function Modeling (IDEF0), National Institute of Standards and Technology, Gaithersburg, MD, December 21, 1993.

OMG, SysML specification, Object Management Group, Needham, MA, May 2006, http://www.omg.org/spec/SysML.

OMG, Unified Modeling Language, Object Management Group, Needham, MA, February 2007, http://www.omg.org/spec/UML.

S.J. Ring, D. Nicholson, and J. Thilenius, An activity-based methodology for development and analysis of integrated DoD architectures, 2004 Command and Control Research and Technology Symposium.

SDD DID, Software Design Description (SDD), DI-IPSC-81435, http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498, 2010.

SRS DID, Software Requirements Specification (SRS), DI-IPSC-81433, http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498, 2010.

SSDD DID, System/Subsystem Design Description (SSDD), DI-IPSC-81432, http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498, 2010.

SSS DID, System/Subsystem Specification (SSS), DI-IPSC-81431, http://www.everyspec.com or http://en.wikipedia.org/wiki/MIL-STD-498, 2010.

TOGAF, The Open Group Architecture Framework, Burlington, MA, http://www.opengroup.org/architecture/togaf9-doc/arch/, 2010.

UPDM WG, Unified Profile for DoDAF and MoDAF, http://www.updm.com/index.htm, 2010.

Vitech, Systems Engineering and Architecting Technical Papers, Vienna, VA, http://www.vitechcorp.com/support/papers.php, 2010.

L.W. Wagenhals, I. Shin, D. Kim, and A.H. Levis, C4ISR architectures: II. A structured analysis approach for architecture design, Syst Eng 3(4) (2000), 248–287.

E. Yourdon, Structured analysis wiki, http://www.yourdon.com/structanalysis/wiki/index.php?title=Introduction.

E. Yourdon and L. Constantine, Structured design, Yourdon Press, Upper Saddle River, NJ, 1975.

J.A. Zachman, A framework for systems architecture, IBM Syst J 26(3) (1987), 454–470.

H. Zimmerman, OSI reference model—the ISO model of architecture for open systems interconnection, IEEE Trans Commun COM-28(4) (April 1980), 425–432.

Chris Piaszczyk is an INCOSE Certified Systems Engineering Professional (CSEP) as well as a Microsoft Certified Systems Engineer (MCSE). He is also a New York State Licensed Professional Engineer (PE). In the course of his employment within the aerospace industry, Chris enjoyed a career spanning analysis and design applications from low earth orbit spacecraft to high energy physics particle accelerators. His systems engineering experience includes structural dynamics analysis and design, fatigue and fracture analysis and design, systems optimization, reliability, availability and maintainability, requirements analysis, and systems architecting. Chris holds a doctorate in Applied Mechanics from the Polytechnic Institute of New York and a master’s degree, also in Applied Mechanics, from the Polytechnic Institute of Warsaw in Poland.



Luis Fernando Barón. Doctoral candidate in Information Science, University of Washington, United States. Professor at Universidad ICESI, Cali, Colombia. Correspondence: University of Washington, Box 352840, Seattle, WA 98195, United States. Email: [email protected].

Ricardo Gómez. PhD in Communication, Cornell University. Assistant professor at the Information School, University of Washington, United States. Email: [email protected].

From Infrastructure to Social Appropriation: An Overview of Information and Communication Technologies (ICT) Policies in Colombia

Origin of the article: This article is part of the research project Public Access to Information and Communication Technologies in Colombia. The project ran for two years (June 2010 to June 2012) and included the following participating institutions: Tascha, the research center of the University of Washington Information School; Universidad Icesi (Cali); and Fundación Colombia Multicolor. The team consisted of Ricardo Gómez (director), Luis Fernando Barón (co-investigator), Mónica Valdés (fieldwork coordinator), and Lady Otálora (research assistant). The authors thank Lady Otálora for her support in preparing this text.

Submission date: February 12, 2012. Acceptance date: May 14, 2012.

SICI code: 2027-2731(201212)31:61<38:DLIAAM>2.0.CO;2-N


Abstract

This article presents an overview of national policies related to public access to information and communication technologies (ICT) in Colombia. Public access to ICT through libraries, telecentres and cybercafés can be a powerful tool to bridge the digital divide and to contribute to social and political equity and change. While the country was one of the pioneers of the region in regard to ICT policy and has exhibited an unusual degree of public consultation, citizen participation and emphasis on social issues beyond basic infrastructure and connectivity, many challenges remain to ensure equitable access and effective social appropriation of ICT to contribute to human development in the country. By bringing together the trajectory and contributions of government policies and social organizations in the country, this article presents a comprehensive and systematic analysis of the public policy environment in Colombia, and how this is enabling (or hindering) the use of ICT for human development and social change.

Keywords: ICT, Colombia, libraries, telecentres, cybercafés.
Search tags: Information and communications technologies – Public policy – Digital divide – Public space, Social change, Colombia.


Introduction

The development of public ICT policies in Colombia is of interest to other Latin American countries because it has at least two unique elements. The first is the significant participation of diverse civil society organizations (universities, NGOs, grassroots organizations) both in the development and adaptation of ICT and in the debates over the design and execution of the public policies that have guided the State's plans and programs in this field. The second is the importance of the relationship between ICT, education, and knowledge, and the prominent place that libraries, cultural centers, and cultural organizations occupy in the development of local and national networks, which have not only strengthened public ICT access experiences but have also served as spaces for exchange and for agreeing on projects among State entities, the private sector, and civil society organizations.

There is, however, a third characteristic of ICT policy in Colombia that is shared with most Latin American countries: the scant attention paid to cybercafés and to the training and ICT access processes these venues could provide. Cybercafés not only represent, by far, the majority of public access venues in the country; they are also the spaces users prefer for their information, communication, and education activities. With the exception of Peru, where the cabinas públicas experience has been a hybrid between telecentres and cybercafés since its founding, other Latin American countries show the same disconnect between public ICT policies and the experiences of the numerous cybercafés that have multiplied in towns and cities.

This article presents some original results of a study on public access to ICT carried out by the University of Washington with the support of Universidad Icesi and Fundación Colombia Multicolor. It is the result of an analysis of primary and secondary sources, which included interviews with experts (10), interviews with operators or managers of public access venues (100), personal history interviews (10), a user survey (1,000), and focus groups (6), together with a literature review and an analysis of public ICT policies in Colombia.

The text is organized as follows. First, it presents an overview of the trajectory followed by public ICT policies in Colombia. It then takes a closer look at the aspects of public policy related to public access to ICT (cybercafés, telecentres, and libraries). Next, it presents a critical analysis of the trajectory of public policies in Colombia, and it closes with a series of challenges these policies face in the short and medium term.

Development of public ICT policies in Colombia

The first efforts to connect Colombia to the internet date from the mid-1980s, when the Universidad Nacional, Universidad de los Andes, and Universidad del Norte established local connections and ran the first tests, with the aim of exchanging knowledge. Tamayo, Delgado, and Penagos (2007) propose three moments in the history of the internet in the country. The first moment, between 1986 and 1993, is characterized by having the universities as the sole actor in ICT development; they laid down the organizing principles of the internet field, privileging knowledge management.

The State's entry marks the beginning of the second moment, which runs from 1994 to 2000. In this period, although the approach proposed by the universities was maintained, the State took the leading role in ICT development, providing not only infrastructure, resources, and equipment but also producing a symbolic framework around the arrival and massification of the internet, marked by a developmentalist outlook (Tamayo et al., 2007).

The third moment begins in 2001 and runs until the publication of Tamayo et al.'s text in 2007. During this period, both the State and private entities merged the cultural and educational approaches that gave rise to the internet field with the economic forces that began to dominate the organization of ICT in the country. A fourth moment, suggested by our inquiry, would be marked by debate and joint work among State institutions (primarily the ministries of Communications, Culture, and Education), private organizations (service providers), and social organizations (NGOs, universities, and grassroots groups) in the construction of more comprehensive ICT development policies and plans for the country.

For its part, the Consejo Nacional de Política Económica y Social (CONPES), Colombia's highest planning authority, identifies two phases in the progress of ICT policy over the last decade: a first phase, between 2000 and 2006, whose priority was to expand community access to basic voice and internet services and to equip public schools with computers; and a second, between 2006 and 2010, which sought to strengthen the provision of broadband access and the processes of ICT appropriation in education, with an emphasis on public institutions, in order to involve the productive sector, especially micro, small, and medium-sized enterprises, and the regions, as a way of encouraging ICT use and uptake (2010).

Table 1 presents a synthesis of the path followed by government policies and programs linked to ICT development in the country between 1994 and 2010.

Table 1. Public policies in Colombia regulating the field of information and communication technologies between 1994 and 2011

1994: Política Nacional de Ciencia y Tecnología 1994-1998, Conpes 2739. Objective: develop the country's capacity to use informatics and computers in education and science.

1998: Plan Nacional de Desarrollo 1998-2002, 'Cambio para construir la paz'. Objective: promote the development of telecommunications (especially infrastructure) to achieve peace, increase productivity and competitiveness, and consolidate the decentralization process.

1999: 'Compartel' program. Objective: provide community telephones in localities without basic telephone service.

1999: 'Computadores para educar' program, Conpes 3063. Objective: promote access to ICT by collecting and reconditioning computers and delivering them to public educational institutions across the country.

2000: 'Agenda de conectividad: el salto a internet', Conpes 3072. Objective: massify the use of ICT to increase the competitiveness of the productive sector, modernize public and government institutions, and broaden access to information.

2000: Decree 2324, related to the 'Computadores para educar' program. Objective: develop a plan for the distribution, use, and effective appropriation of technology for the institutions receiving the equipment (Ministry of Education and local authorities).

2000: 'Compartel. Internet social' program (one of the 30 programs established by the 'Agenda de conectividad'). Objective: provide internet service in municipal seats that lack it.

2002: 'Lineamientos de política de telecomunicaciones sociales 2002-2003', Conpes 3171. Objective: reduce the access gap and universalize telecommunications services.

2006: Plan Nacional de Desarrollo 2006-2010, 'Estado comunitario: desarrollo para todos'. Objective: achieve digital inclusion by continuing the universal access and service programs, and incorporate ICT as a cross-cutting engine of State development.

2007: Plan 'Visión Colombia II centenario: 2019'. Objective: the strategies "Generate adequate infrastructure for development" and "Advance toward an informed society" incorporate goals of universal telecommunications service and access and the development of capacities for ICT use and appropriation.

2007: 'Lineamientos de política para reformular el programa Compartel de telecomunicaciones sociales', Conpes 3457. Objective: reformulate the 'Compartel' program as a strategic response to market dynamics, in order to consolidate the results achieved.

2008: Plan Nacional de TIC, 'En línea con el futuro 2008-2019'. Objective: ensure that by 2019 all Colombians are connected and informed, making efficient and productive use of ICT for greater social inclusion and competitiveness.

2009: Law 1341. Objective: this law turned the Ministry of Communications into the Ministry of Information and Communication Technologies, in order to massify ICT access and use, promote free competition, use infrastructure efficiently, and protect users' rights.

2009: Law 1286. Objective: among other things, this law transformed the Sistema Nacional de Ciencia y Tecnología into the Sistema Nacional de Ciencia, Tecnología e Innovación (SNCTI).

2010: 'Lineamientos de política para la continuidad de los programas de acceso y servicio universal a las tecnologías de la comunicación y la información', Conpes 3670. Objective: define policy guidelines for the continuity of initiatives that promote ICT access, use, and uptake, coordinated across the programs of the Ministry of ICT and other government bodies.

2010: Library Law 1379. Objective: the Public Library Network will provide internet access and digital literacy as one of its basic services.

Source: authors' elaboration based on document review.

Public access to ICT in public policy

The creation of the 'Compartel' and 'Computadores para educar' programs in 1999, together with the design of the 'Agenda de conectividad', gave a major boost to ICT development in Colombia. Owing to technology transfer processes, difficulties in accessing personal computers, and the state of telecommunications infrastructure development in the country, these programs have had a strong focus on access in public spaces, first through telecentres and educational institutions, and later through libraries, mayors' offices, and courts.

'Compartel' initially aimed to provide access to community telephones in localities that lacked basic telephone service, above all in rural areas, to facilitate the development of educational, cultural, and health programs by the State and the community. A year later, the 'Compartel. Internet social' program was launched, which implemented the creation of community internet access centers (CACI), or telecentres, in municipal seats and urban centers of the country, located in the lowest socioeconomic strata (estratos 1 and 2) (Conpes, 2010). Over time, the 'Compartel' program has adopted other lines of work: social internet and telecentres, broadband connectivity for public institutions, expansion and replacement of fixed-line (TPBC) networks, and expansion of broadband networks with an emphasis on micro, small, and medium-sized enterprises (Mipymes).

In 2000, the Conpes 3072 document, 'Agenda de conectividad: el salto a internet', was published, with the objective of massifying the use of ICT and thereby increasing the competitiveness of the productive sector, modernizing public and government institutions, and socializing access to information. This rested on the assumption that efficient handling of information would facilitate its acquisition, absorption, and communication, and that its mass use would create an attractive economic environment and increase participation in what the document calls the new e-conomy.

To achieve its objectives, the 'Agenda' directed its action toward three priority groups (community, productive sector, and State) through six strategies: 1. access to information infrastructure; 2. use of IT in educational processes and training in its use; 3. use of IT in business; 4. promotion of the national IT industry; 5. content generation; and 6. online government. Through the 'Agenda de conectividad', the Government set out to disseminate information and knowledge about ICT, to train communities in the use of these technologies, and to promote their use as educational tools. It also set out to offer access to information technologies to the majority of Colombians at more affordable costs.

In 2000, Decree 2324 integrated the 'Computadores para educar' program into the national 'Agenda de conectividad', directing the computers not only to public educational institutions but also to teacher-training colleges (normales superiores), libraries, publicly owned casas de la cultura, and telecentres that were part of the social telecommunications program. The Servicio Nacional de Aprendizaje (Sena) took charge of reconditioning the equipment and training those responsible in its use.

However, these initiatives proved insufficient in the face of the development and usage processes taking shape throughout the national territory. This is reflected in criticism from NGOs recognized in the field of communications and information, such as Colnodo, which has played a key role in ICT development in Colombia and in the coordination of the Red Nacional de Telecentros. In this regard, Rueda, Rozo, and Rojas (2006) also argued that up to that point State action had centered on improving education coverage and ICT penetration indices through the development of infrastructure and equipment, giving greater weight to technical discourses and making participatory dynamics harder to advance.

From the middle of the first decade of this new century, there was an explosion of telecentres, most of them promoted by social organizations and by private and educational institutions. Their work centered on supporting community projects, involving the communities near the telecentres, encouraging better use of these spaces, and strengthening the technical and administrative capacities of their operators.

These contexts created the need to place greater emphasis, within State policy, on local dynamics of ICT access, as confirmed by the Conpes 3457 document of 2007, which reformulated the 'Compartel' program and stressed the need to give greater relevance to promoting ICT use and appropriation among the beneficiary population. The diagnosis in this document showed that Colombia was still below the Latin American average in access to some services, such as broadband internet, and in the massification and use of ICT. It also stated that an internal gap existed in the conditions of access to and use of these technologies (Consejo Nacional de Política Económica y Social [CONPES], 2007).

In particular, Conpes 3457 took up the results of an evaluation commissioned by the 'Compartel' program from the Centro de Estudios sobre Desarrollo Económico of the Universidad de los Andes in 2006. The study showed a positive impact on the income level of individuals who regularly attended community telecentres. However, it pointed to the need to promote greater use of the telecentres by the beneficiary population, as well as better articulation of the existing telecentre infrastructure in the regions with other projects carried out by the national Government that lack ICT infrastructure (such as the Sena, the Federación Nacional de Cafeteros, and the Ministry of Agriculture, among others). The study also recommended updating the technological infrastructure, carrying out maintenance activities, and improving the quality of the services provided in some telecentres.

A national ICT plan

The following planning documents are the basis for the guidelines of current ICT policies in Colombia: the Plan Nacional de Desarrollo 2006-2010, 'Estado comunitario: desarrollo para todos', and the document 'Visión Colombia II centenario: 2019'. In particular, the Plan Nacional de Tecnologías de la Información y las Comunicaciones (Plan TIC 2008-2019) is the document that gathers and articulates the experiences and advances of the country's policies up to that point. The Plan states that:

[…] A good endowment of infrastructure and access is not enough if it is not reflected in substantial improvements in quality of life, in equity, in the closing of gaps, in mass use. This highlights the importance of intensifying the processes of education, training, research, design, and dissemination of useful content made available to individuals. (Ministerio de Comunicaciones, 2008)

As Olga Paz Martínez (2006) and the Corporación Colombia Digital (2005) point out, until 2006 there was no national policy bringing together the various national and local ICT initiatives under way in the country. Through the Plan, ICT-related initiatives were channeled into a single policy that provides guidelines for the Government, the economic sectors, and civil society to work toward common objectives (Ministerio de Comunicaciones, 2008).

The 'Plan TIC' was developed through a participatory process supported by an interdisciplinary group of Colombian experts, and it involved meetings with a wide range of social sectors in several cities of the country, as well as the use of a virtual forum and an open online consultation. In addition, it sought to align its objectives with Visión Colombia 2019, the Plan Nacional de Desarrollo 2006-2010, the national competitiveness policy, the science and technology plan, and the strategic program for the use of media and information and communication technologies (MTIC) in education (Rodríguez, 2008). However, although universities, research groups, and various social actors participated in the process, there was considerable criticism of the limited regional inclusion, the scant participation of grassroots social organizations, and the Plan's strongly business-oriented and commercial character (Jaillier, 2009). Table 2 presents its central components.

Table 2. Axes and programs of the 'Plan TIC'

Education: Programa Nacional de Uso de Medios y Nuevas Tecnologías 'Colombia aprende'; 'Computadores para educar'; Red Nacional Académica de Tecnología Avanzada (RENATA); Sena programs.

Health: Programa de Telemedicina; Sistema de Información de la Protección (SISPRO).

Justice: plan for the technological modernization of the judicial administration.

Business competitiveness: Mipyme digital; e-commerce; information security.

Community: 'Compartel' program; national ICT culture; information security; Pacto Social Digital.

Online government: Gobierno en línea; government intranet.

Regulatory framework, research, and development: Centro de Investigación y Formación de Alto Nivel en TIC; Centro de Bioinformática; Centro de Investigación en Excelencia Electrónica, Telecomunicaciones e Informática (ETI); ICT Observatory.

Source: adapted from Ministerio de Comunicaciones (2008).

The Community axis of the Plan TIC continues to work under the principle of promoting universal use through community access for certain sectors via the 'Compartel' programs, combined with mechanisms to encourage the expansion of private operators and wireless access. The document acknowledges, however, that the 'Compartel' program was at that time redefining its goals, objectives, and modes of operation. The most salient points of this reorientation are: 1. adjustments to the regionalization of its projects, orienting efforts toward the local level according to the needs of each setting; 2. positioning and publicizing its programs to show that the program seeks ICT access and appropriation in the most remote areas and for the lowest socioeconomic strata; 3. emphasizing connectivity for Mipymes; and 4. redefining alliances and agreements with private entities to ensure the sustainability of the telecentres once the State withdraws its contributions.

To close this trajectory, it is necessary to mention the Conpes 3670 document on the continuity of universal access and service programs for information and communication technologies, approved on June 28, 2010 (Conpes, 2010). According to the information this document provides, between 2005 and 2009 the Fondo de TIC (formerly the Fondo de Comunicaciones) invested 0.9 trillion pesos ($0,9 billones) across different programs, such as social telecommunications (universal access to ICT), Gobierno en línea, the public radio and television network, social mail, and other institutional programs. Investments in social telecommunications represent more than 70% of the fund's resources, and this group includes the 'Compartel' and 'Computadores para educar' programs.

According to CONPES 3670, the 'Compartel' program has benefited approximately 22,540 school sites, of which 1,669 are part of the strategy of opening educational establishments as telecentres. It has also contributed to 415 libraries belonging to the Red Nacional de Bibliotecas Públicas and to some 78 casas de cultura. Likewise, through the 'Computadores para educar' program, 292 libraries have received 4,450 computers and 124 casas de cultura have received 2,363 computers. These overlaps make it difficult to trace the reach of public ICT access programs, given that some originate in school initiatives that are supposed to serve the public, while others originate in public access initiatives that serve the student population.

The policy nevertheless identifies two problem areas for the continuity and strengthening of public ICT access. On the one hand, there are few complementary funding sources to support the continuity of the services already contracted; on the other, there are the risks associated with technological change: improving service quality and managing obsolete computers.

For these reasons, the policy document proposes the following strategies: a. coordination with local governments to incorporate connectivity as a recurring expense in the budgets of territorial entities; b. negotiations with international organizations and/or the private sector to gather resources for upgrading and improving physical and technological infrastructure, expanding coverage, and sustaining the service in public libraries and casas de cultura; and c. the prioritization of the necessary resources by the Ministry of Information and Communication Technologies and/or the Ministry of Culture within their respective medium-term expenditure frameworks.

Finally, the new Library Law (Law 1379 of 2010) stipulates that all libraries in the Network of Public Libraries shall provide internet access and digital literacy as part of their basic services (Congreso de la República de Colombia, 2010), and that by 2015 all of them must have internet connectivity. For these reasons, the new policy establishes that the Ministry of ICT and the Ministry of Culture must define a strategy that explores different alternatives for connectivity and for equipping the technology stock, taking into account both an assessment of the current and future state of the market in the regions and technical, economic, and service-quality criteria, in order to determine the respective responsibilities.

Public policies and social organizations

Three phenomena have motivated the interest and activity of Colombia's social organizations around ICT: their needs for information, communication, and knowledge exchange; their relationships with national and international networks; and their work in a wide variety of social and cultural programs. These processes have shaped their relations with government entities at the local, national, and international levels, as well as their involvement in the debate, design, and implementation of public policies in local, national, and international arenas.

One of the most important actors in the processes of producing and implementing public policy in Colombia is the National Telecenter Network (Red Nacional de Telecentros). This initiative, made up of telecenter leaders, academics, NGOs, and State institutions working on ICT issues, aims to support processes of management, participation, and improvement of the conditions of individuals and communities, through strategies to strengthen skills, the work of the telecenters, and their sustainability.

In particular, the Network works toward the definition and execution of plans and policies that strengthen telecenters and the social actors linked to ICT in Colombia. The Network has not only developed a map of experiences in Colombia with information on their basic characteristics, but has also organized a series of national and regional gatherings and developed spaces for debate and exchange at the local, national, and international levels.

Another example of alliances among private, public, and educational institutions with significant impacts on public policy was the project for exchanging experiences and knowledge among various telecenter actors begun in 2006: "Gestión de conocimiento e intercambio de experiencias entre telecentros comunitarios y telecentros Compartel en Colombia", in which Colnodo, the Universidad Autónoma de Occidente (UAO), the Telecentre.org Foundation, and the 'Compartel' program participated (Casasbuenas, 2007). Among the project's notable results was the design of a methodology for the social appropriation of telecenters, which served as a reference in the installation and organization of the "new telecenters", or access centers belonging to the 'Compartel' program, created under Conpes 3457 of 2007. These "new telecenters" were located in public schools and provided their services outside school hours. The aim was to strengthen them with training offerings, materials, methodologies, and knowledge exchange (Paz Martínez, 2009).

Moreover, in 2007, as part of an international process promoted by the Association for Progressive Communications (APC), civil society organizations convened the first consultation on ICT policies in Colombia. The consultation was organized by Colnodo and APC, in partnership with the Universidad Externado de Colombia, the Observatory of Government, Society and Information Technologies (Observatics), the Network for Local Electronic Governance (i-local), and the Corporación Colombia Digital. Its objective was to promote spaces for dialogue on ICT policies among diverse social sectors, in order to develop proposals that the national government would take into account (Colnodo, APC, Corporación Colombia Digital, & Observatics, 2007). The consultation set out to broaden the discussion and to increase the inclusion and visibility of these policies in the citizen agenda; it was thus designed as an input for the 'Plan TIC' and as a contribution to the restructuring of the Ministry of Communications.

Social organizations have also contributed significantly to the debates and practices around the social appropriation of ICT. The reflections and experiences of organizations such as the Fundación Makaia on social appropriation have helped to clarify the relationship between the so-called "digital divide" and the "social divide": social inequalities across multiple dimensions also significantly affect ICT access and appropriation, and they are reflected in phenomena that might appear external to these issues, such as the cultural and contextual dimensions of communities (Botero, Rojas, Cadeac & Escobar, 2009).

Libraries as key points for public access to ICT

Thanks to joint work between State institutions and social organizations, Colombia has committed itself, at various events and with various entities, to developing an inclusive and participatory information and knowledge society. Although these guidelines are not binding, that is, compliance is not mandatory, they have been very useful: on the one hand, they have served as tools for national processes and for guaranteeing the rights and duties of the country's social organizations; on the other, they have served as a reference point for government institutions to maintain a certain balance between public responsibilities and the provision of services by the private sector.

Among the most important international guidelines in the ICT sector are the Millennium Development Goals (MDGs) set out in 2000 and the World Summit on the Information Society (held in Geneva in 2003 and Tunis in 2005), which boosted the ICT sector and have been adopted in various policies (such as the National Development Plan 2006-2010, the document 'Visión Colombia II centenario: 2019', and Conpes Social 91). These instruments have accelerated the consolidation of this field by serving as justification for guidelines or as development plans at the national, regional, and local scales.


In particular, libraries and their networks have adopted manifestos and declarations that highlight the need to adopt and adapt ICT for their work, such as the 2002 'Internet Manifesto' of the International Federation of Library Associations (IFLA), which proclaims the principles of freedom of access to information through the internet in libraries, and the 2006 IFLA/Unesco internet guidelines, which set out recommendations for developing action policies and priorities applicable to internet services, according to the needs of each community.

These international orientations, together with the work of library networks, public officials, and academics, contributed to laws such as Law 1379 of 2010 ("by which the national network of public libraries is organized and other provisions are issued") establishing the rights of expression and access to information as fundamental principles. The law not only seeks to guarantee internet access as a basic service, but also establishes digital literacy and local information services as challenges to be taken up by public libraries (Congreso de la República de Colombia, 2010).

Furthermore, the policies and strategic actions of library networks related to ICT access are working to address not only access to and use and management of information, but also programs of education and of knowledge circulation and construction. For example, in BibloRed's Strategic Plan, ICT access is understood as a mechanism through which public libraries can distribute and generate cultural capital. The plan states that the role of the public library should be oriented toward a local-global-local dimension, while ICT are oriented toward a global-local-global dimension; the public library is thus a bridge between users and ICT (Alcaldía Mayor de Bogotá, 2009).

The library network of Medellín and its Metropolitan Area, for its part, has its own portal (http://www.reddebibliotecas.org.co), which forms part of the local digital content of Medellín Digital, a municipal program that works to spread the use of ICT and access to educational and teaching resources in order to improve the quality of education in the region (Red de Bibliotecas Medellín Área Metropolitana, 2008). The Network of Community Libraries of Cali (http://www.cali.gov.co/redbibliotecas), as part of its process of upgrading library and cultural services, has undertaken to facilitate access to information in all its forms, with an emphasis on ICT access. The network thus has 24 libraries connected to the internet, 7 internet access centers belonging to municipal programs (Infocalis), 1 telecenter, and 1 Compartel center (Alcaldía Municipio de Santiago de Cali, 2011).

What to do about cybercafés?

The situation of cybercafés is paradoxical, even though the figures show that they are the most numerous venues in the country: according to the most recent data from the Ministry of Information and Communication Technologies (2012), there are almost 15,000 cybercafés in the country, some 2,700 telecenters (including governmental ones, mainly Compartel, and non-governmental ones), and some 670 libraries offering public internet access. Moreover, cybercafés tend to be users' preferred venues for accessing ICT, according to international studies covering Colombia and 24 other countries (Gomez & Gould, 2010). In addition, cost does not appear to be a factor preventing the preference for cybercafés over telecenters or libraries (Clark & Gomez, 2011). Nevertheless, there is very little information about cybercafés in the country, and almost no study includes them systematically. Beyond the stereotypes and prejudices they face, they have not been part of the scenarios and processes of reflection, exchange, and public policy construction described above. There is much to learn from, and much to share with, the experience of cybercafés if ICT, and libraries and telecenters in particular, are to become successful spaces of public access.

In 2008, DANE published a press bulletin on "Basic indicators of information and communication technologies (ICT). Households, commerce, industry, services and micro-establishments", with data from the 2007 Gran Encuesta Integrada de Hogares (GEIH), which states that "the venues most used by people aged 5 and over to access the internet were paid public access centers (internet cafés). 53.1% of people used them during the last 12 months."

In 2009, DANE published a new press bulletin on basic ICT indicators, devoted solely to ICT use and penetration in households and among people aged 5 and over. This document drew on two surveys: the 2008 Encuesta de Calidad de Vida (ECV), which provides information on ICT ownership and consumption habits, and the GEIH, whose ICT module, applied from July to December 2008, measured ICT use and penetration and gathered information on locations, frequency of use, and activities carried out on the internet. The most common venue for internet access was, once again, paid public access centers (internet cafés): 47.2% of respondents reported having used them during the previous twelve months, followed by the home (43.8%) and educational institutions (26.6%); these figures are not mutually exclusive, since some people reported access from more than one location. The least used venues were free public access centers, at 4.1%.

Furthermore, the results of this study show not only the growth of this phenomenon in the country, but also the importance of these access venues for diverse processes of information, communication, access, and knowledge construction. Cybercafés have also become interesting venues for ICT training, for gathering and socializing (especially among children and young people), and even for political recruitment and formation. The country's policies, however, have displayed not only ignorance of these processes but also a punitive attitude toward these experiences. Thus, on the one hand, cybercafés that belong to the informal economy are pursued; on the other, the few policy references to them emphasize the need for greater control over the access cybercafés provide, in the name of the safety of vulnerable populations such as children and young people (Law 1341 of 2009).

The state of development and the uses of cybercafés represent a challenge not only for academia and the networks working on public ICT access, but also for policymakers, since, as has been seen, this field has creatively and productively brought together local and national government entities, academia, social organizations, and some private sector actors, a trend that may well continue. In Argentina, an interesting study revealed the potential contribution of cybercafés to social development (Finquelievich & Prince, 2007).

In short, the experience of cybercafés should be incorporated more clearly and effectively, in order to strengthen not only spaces for exchange and joint work but also forums for discussing and building plans and programs that integrate them into state and institutional policies and plans.

Conclusions: challenges for public ICT policies in Colombia

We have presented an overview of the recent history of ICT policies in Colombia and of their potential contribution to social development, which today forms an integral part of current policies. We have highlighted the important role of civil society organizations in the discussion of these policies and in the implementation of programs for ICT access, use, and appropriation. In this case, policy production reveals not only an interesting process of public deliberation and political participation by civil society (Rueda, 2005), but also a sense of social action that privileges the potential of ICT for achieving rights to information and communication, as well as access to and exchange of knowledge.

Finally, we have pointed out the important role that cybercafés play in public ICT access in the country, and the urgency of including them both in dialogues and negotiations and in plans for implementing connectivity projects with a social vision. We close this discussion with four major challenges for public ICT policy in Colombia.

Economic and technological sustainability

Beyond the inclusion of cybercafés in the processes of policy exchange and construction, it is important to point out some gaps and challenges that emerge from the analysis traced in this article. The first and most salient concern the economic and technological sustainability of these experiences. As the former acting Minister of ICT Daniel Enrique Medina stated at the 2010 National ICT Meeting (Medina, 2010), one of the great threats to current policies is the lack of budget to sustain a project that may cost around 350 million dollars if it is to keep up the pace of development it has had in Colombia.

As noted above, the financing of these policies in the country rests largely on the ICT Fund of the new Ministry of ICT, whose income comes mostly from the fees paid by network and service providers and from the use of the radio spectrum. And although participation has been broadened so that other sectors and local governments can share in this type of investment (Conpes, 2010), it is necessary to deepen the networks and alliances among public institutions, the private sector, and social organizations. For, as the former minister himself points out, if financing is not found in time, the expansion of ICT coverage and appropriation in Colombia will be put at risk, which would increase social, economic, and regional inequality in the country.

In view of this, the opportunities for establishing alliances with cybercafés may be even more important. If cybercafés provide the access, can the social internet initiatives promoted by the National ICT Plan foster the effective use and social appropriation of ICT in alliance with cybercafés, or at least by drawing on some of the ideas behind how they operate as businesses? This is a question that deserves further exploration and creative solutions.

Service in rural areas

Coverage of and access to ICT in rural and semi-rural areas, meanwhile, remains a major challenge. Although the composition of the country has changed profoundly in recent decades, the rural population still accounts for a very significant share, approximately 26% of the Colombian population (Dane, 2010). Moreover, strengthening rural communities would contribute not only to the search for peace in Colombia, but also to the socioeconomic and political development prospects of the country and its regions. And ICT, as envisaged in the policies of the Pastrana administration (1998-2002), can play an important role in these processes.

As the statistics in the policy documents themselves show, most internet providers and users were located in the country's five main cities, leaving out not only most other cities and districts (and even several areas within them), but also rural and semi-rural areas. Moreover, as Olga Paz pointed out some years ago, the social inclusion programs proposed by the government have failed to reach most rural and semi-rural populations. This has been the case, according to Paz, with programs such as 'Compartel', the 'Agenda de conectividad' with its 'Gobierno en línea' program, and 'Computadores para educar', as well as the virtual education initiatives offered by the Sena, the Corporación Colombiana de Investigación Agropecuaria (CORPOICA), and the Red de Información y Comunicación Estratégica del Sector Agropecuario (AGRONET). For these reasons, she proposes reviewing and measuring these programs and policies that encourage internet use, in order to strengthen and improve the application of public policies (Paz Martínez, 2007).

Access, use, and appropriation become even more complex when one considers the cultural, educational, and sociopolitical characteristics of rural individuals and communities. And as has been seen, the policies themselves have posed serious challenges, not only in regionalization processes but also in understanding local worlds and needs, from a perspective that allows the transition from technological appropriation to a social appropriation in which ICT become social or economic tools for closing social gaps (Zambrano, 2009).

Beyond modernization: social appropriation

Likewise, it is important to stress that despite the efforts of civil society organizations (several of them in alliance with government bodies), State policies and programs continue to privilege a developmentalist and instrumental approach. This approach tends to regard the diffusion of and access to ICT, in themselves, as solutions to problems of governance, competitiveness, peace, or poverty, without taking into account, on the one hand, the contexts and needs of the populations involved, and, on the other, the ways these technologies are used, integrated, and innovated upon in the daily life of individuals, organizations, and communities.

Public policies in Colombia continue to emphasize strategies for guaranteeing service provision, increasing internet connections and speed, and expanding the number of computers and access points in public institutions, while paying less attention to the quality and outcomes of the uses being made of these technologies. Processes of "technological literacy" and the training of highly qualified human resources for research and innovation are very important; however, they are insufficient for the purpose of integrating ICT access more effectively with processes of development, citizenship, and cultural and political integration. It is therefore important to strengthen, first, research and the production of information on the outcomes of usage experiences, and to promote, second, community participation in decision-making processes concerning the adoption and development of ICT programs.

Ethnicity, gender, and generation in ICT policies

On the other hand, despite the leadership and participation of a wide range of social sectors, public ICT policies, and particularly those related to public access, show serious gaps, not only in reflection but also in the formulation of plans and programs that take into account the country's ethnic, regional, generational, and gender differences.

Oswaldo Ospina, coordinator of ICT for education and social development at the Corporación Colombia Digital, states that various studies have shown that ICT access and use differ by gender and educational level: women access ICT less, and if they have a low educational level, their chances are slimmer still (Ospina, 2009). For Ospina, these data imply, for example, developing mechanisms to monitor the impact of ICT use and to make visible whether or not they help reduce women's conditions of exclusion and inequity. Measurements of ICT use should include indicators of gender difference, rather than only measuring the distribution of users in the country in regional and age terms.

Likewise, there is a need to consider access and appropriation programs and projects aimed specifically at children and young people, since they are not only the main users of public access venues, but also those with the greatest predisposition, skill, and capacity for handling ICT equipment and tools (Barón, 1999; Cadena, 1999). Paradoxically, it is sometimes children and young people who encounter the greatest difficulties in accessing these technologies and the variety of information, communications, and knowledge they offer.

Impact indicators

The policies analyzed have at least two official sources of indicators on ICT performance. On the one hand, DANE gathers information on ICT use in the productive and educational sectors, the State, and the community, through periodic surveys such as the Encuesta Continua de Hogares (ECH), the GEIH, and the ECV. On the other hand, the Sistema de Información Unificado del Sector de las Telecomunicaciones (SIUST) accounts for the country's existing infrastructure by collecting data from the operators of telephone lines, mobile services, and internet services in Colombia.

Although the data and statistics of State institutions have adopted the international standards of UNCTAD/UN and ECLAC to facilitate international comparison of figures, they have faced problems of continuity and of shared categories and variables that would allow monitoring and comparisons (temporal, spatial, or between different population groups, for example).

The diversity, disparity, and lack of continuity of information and indicators on ICT performance represent a major challenge for the sectors involved in building and implementing public policies. Moreover, despite the cross-sector exchanges highlighted in this article, the information gathered shows that there is no dialogue, nor any joint examination, of the figures and variables produced by government institutions, social organizations, and academia.

Building comprehensive and continuous views of the data and information on the performance of ICT, and of access centers in particular, is a very important challenge, all the more so if it includes the possibility of jointly constructing indicators and analyses that consider the local and global contexts affecting the development of ICT programs. It would also be important to develop conceptual and methodological frameworks for building indicators of the social, technological, and cultural impacts of public access centers, since this remains a major gap not only in Colombia but also in many countries around the world (Sey & Fellows, 2009). Such information would contribute not only to a better understanding of the trajectories of these experiences, but also to their strengthening and development in step with the social and technological changes of the contemporary world.

References

Alcaldía Mayor de Bogotá, Secretaría de Educación del Distrito, Dirección de Ciencia, Tecnología y Medios Educativos. (2009). BibloRed plan estratégico junio 2009-mayo 2011. Retrieved from http://www.biblored.edu.co/files/images/contenido/PLAN%20ESTRATEGICO%20BIBLORED%202009.pdf

Alcaldía Municipio Santiago de Cali. (2011). Mapa de conectividad municipio de Santiago de Cali. Retrieved from http://www.cali.gov.co/redbibliotecas/publicaciones.php?id=36034

Barón, L. F. (1999). Anticipándose al futuro para atrapar el presente: experiencias de acceso comunitario a nuevas tecnologías de comunicación e información en Bogotá. Retrieved from http://uib.colnodo.apc.org/investigaciones.html

Red de Bibliotecas Medellín Área Metropolitana. (2008). Presentation at Club Networking TIC, Medellín. Retrieved from http://www.slideshare.net/Networking.tic/red-de-bibliotecas-de-medelln-presentation

Botero, S., Rojas, A., Cadeac, P., & Escobar, C. (2009). Apropiación de las TIC en la agenda pública. Retrieved from http://www.makaia.org/recursos.shtml?apc=h1d-1682-1682-&x=1469

Cadena, S. (1999). Unidades informativas barriales: reflexiones de un proceso de apropiación tecnológica. Retrieved from http://reports.idrc.ca/fr/ev-3641-201-1-DO_TOPIC.html

Casasbuenas, J. (2007). Gestión de conocimiento e intercambio de experiencias entre telecentros comunitarios y telecentros Compartel en Colombia. Paper presented at the Encuentro Latinoamericano de Telecentros e Inclusión Social 2007, Santiago de Chile.

Clark, M., & Gómez, R. (2011). The negligible role of fees as a barrier to public access computing in developing countries. EJISDC, 46(1), 1-4.

Colnodo, APC, Corporación Colombia Digital, & Observatorio de Sociedad, Gobierno y Tecnologías de Información (Observatics). (2007). Resultados consulta sobre políticas de tecnologías de información y comunicación (TIC) en Colombia. Bogotá: Colnodo.

Colombia, Congreso de la República. (2010). Ley 1379 de 2010, Ley de Bibliotecas Públicas (January 15, 2010).

Colombia, Conpes. (2007). Documento Conpes 3457: Lineamientos de política para reformular el programa Compartel de telecomunicaciones sociales. Bogotá: Ministerio de Comunicaciones.

Colombia, Conpes. (2010). Documento Conpes 3670: Lineamientos de política para la continuidad de los programas de acceso y servicio universal a las tecnologías de la información y las comunicaciones. Bogotá: Ministerio de Comunicaciones.

Colombia, DANE. (2010). Censo general del 2005. Retrieved from http://www.dane.gov.co/daneweb_V09/index.php?option=com_content&view=article&id=307&Itemid=124

Colombia, Ministerio de Comunicaciones. (2008). Plan Nacional de Tecnologías de la Información y las Comunicaciones. Plan Nacional de TIC 2008-2019. Todos los colombianos conectados, todos los colombianos informados. Retrieved from http://www.colombiaplantic.org.co/medios/docs/PLAN_TIC_COLOMBIA.pdf

Corporación Colombia Digital. (2005). La sociedad del conocimiento en Colombia y el fortalecimiento de los procesos comunitarios. Bogotá: CCD.

Finquelievich, S., & Prince, A. (2007). El (involuntario) rol social de los cibercafés [Cybercafés' (involuntary) social role]. Buenos Aires: Editorial Dunken.

Gomez, R., & Gould, E. (2010). The "cool factor" of public access to ICT: users' perceptions of trust in libraries, telecentres and cybercafés in developing countries. Information Technology & People, 23(3), 247-264.

Jaillier, É. (2009). Políticas públicas sobre TIC en el marco de la sociedad de la información en Colombia: una reflexión sobre un tema aún pendiente en la investigación social. Revista Q, 3(6), 25.

Ospina, O. (2009). Aprendamos. Boletín Semanal Colombia Digital, 4(10). Retrieved from http://www.ccdboletin.net/index.php?option=com_content&view=category&layout=blog&id=182&Itemid=61

Paz Martínez, O. (2006). Reporte de políticas TIC en Colombia. Bogotá: Colnodo.

Paz Martínez, O. (2007). Alternativas y desafíos de las TIC en el medio rural: apuntes con base en el contexto colombiano. Retrieved from http://www.colnodo.apc.org/investigacion.shtml?apc=f-xx-1-&x=179

Paz Martínez, O. P. (2009). Informe de acción de incidencia regional Colombia: nuevos telecentros del programa Compartel. Bogotá: APC, Andina TIC.

Rodríguez, M. (2008, January-March). El plan nacional de TIC 2008-2019. Revista Sistemas, 104, 14-21.

Rueda, R. (2005). Apropiación social de las tecnologías de la información: ciberciudadanías emergentes. Retrieved from http://alainet.org/active/9896

Rueda, R., Rozo, C., & Rojas, D. (2006). Formación de docentes y tecnologías de la información: el caso de las universidades y normales de Bogotá. Paper presented at the I Congreso Iberoamericano de Ciencia, Tecnología, Sociedad e Innovación CTS+I, México DF.

Sey, A., & Fellows, M. (2009). Literature review on the impact of public access to information and communication technologies (Working Paper No. 6). Seattle: Center for Information & Society, University of Washington.

Tamayo, C. A., Delgado, J. D., & Penagos, J. E. (2007). Hacer real lo virtual: discursos del desarrollo, tecnologías e historia del internet en Colombia. Bogotá: Cinep, Colciencias, Universidad Javeriana.

Zambrano, J. (2009). Las políticas públicas en TIC: una oportunidad de cerrar la brecha social. Revista Q, 4(7), 17.


A Policy Description and Its Execution Scheduling for Automated IT Systems Management

YUTAKA KUDO,1 TOMOHIRO MORIMURA,1 YOSHIMASA MASUOKA,1 and NORIHISA KOMODA2

1Hitachi, Ltd., Japan
2Osaka University, Japan

SUMMARY

In order to execute multiple policies in a policy-based automation system safely and efficiently, we propose and develop a concurrency control mechanism for policy-based automation systems. This mechanism analyzes the logical structure of the target application system to identify the IT resources affected by the policy action, and locks the resources to prevent conflicts between multiple policies. An efficient preemptive scheduling scheme for multiple policy executions is also developed. © 2012 Wiley Periodicals, Inc. Electron Comm Jpn, 95(9): 27–35, 2012; Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/ecj.11399

Key words: IT operations management; policy-based automation; policy description; concurrency control.

1. Introduction

As information systems increase in scale and complexity, there is a need for technologies to save effort in operation management, such as simplification of system development, maintenance of the optimal system state by autonomous configuration changes, and support for recovery from failures [1–3]. In information systems, and especially online application systems such as Web-based three-tier systems, the number of users is hard to predict, which makes the systems vulnerable to performance degradation caused by unexpected access concentration. In addition, open-system servers are less reliable and more vulnerable to faults than conventional mainframes. Thus, operation management with due regard for load fluctuations and failure occurrence is necessary for the consistent operation of information systems.

Highly skilled system administrators and other experts are needed to provide prompt response to load fluctuations and system faults. Consequently, policy-based operation management is attracting much attention [4, 5]. Here experts' know-how is formally described as policies, and the information technology (IT) resources of an application system are supervised on the basis of such policies. When symptoms of performance degradation or possible failures are detected, load patterns are analyzed, causes are inferred, and appropriate measures are taken, such as the addition of IT resources or temporary use restrictions. When policy-based operation management is applied, multiple predefined policies may be launched simultaneously in some cases. However, if such policies involve configuration changes for the same IT resources, this may result in conflicts and lead to consequences other than those intended by the policy setter. Therefore, a policy control system that implements policy-based operation management must decide whether a policy may be executed concurrently with another policy or should be kept on standby until the completion of execution.

Furthermore, when such concurrent execution decisions are made according to the description of every policy, the addition or removal of policies in an application system must be accompanied by correction of constraints on existing policies, which may lead to a dramatic increase in the complexity of policy description [6].

In this study, we propose a policy execution scheduling method. In particular, when multiple policies can be executed simultaneously in an application system, the overlap of the resources required by each policy is estimated by using a logical structure tree of the target application system, and the policies are executed either concurrently or one after another.


Electronics and Communications in Japan, Vol. 95, No. 9, 2012. Translated from Denki Gakkai Ronbunshi, Vol. 131-C, No. 10, October 2011, pp. 1803–1810.

Contract grant sponsor: Business Grid Computing Development Project instituted by the Ministry of Economy, Trade and Industry in 2003.


Below, the basic operation of the policy control system and scenario examples are described in Section 2; problems of policy execution control are addressed in Section 3; and a prototype design of a policy control system is explained in Section 4. Then, in Section 5, we evaluate the performance of policy execution control and the complexity of policy description in the proposed policy control system.

2. Policy Control System

2.1 Basic operation of policy control system

A policy control system supervises the conditions of a target information system according to policy descriptions. When predefined conditions are met, configuration changes and other procedures are applied to the information system so as to maintain its operational state.

A policy description combines firing conditions to execute an operation with a procedure for execution. Firing conditions specify a threshold for recognition of an event issued in case of a system fault or overload; the execution procedure describes the execution flow of a configuration change or other action. On the arrival of an event, the policy control system looks for a policy that specifies the event as a firing condition and automatically executes the actions specified by the policy.
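To make this concrete, the following is a minimal sketch, in Python, of what such a policy description might look like. The class and field names (FiringCondition, Policy) and the scale-out example are our own illustration under the assumptions above; the paper does not prescribe a concrete format.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FiringCondition:
    event_type: str      # e.g., an overload or fault event reported by monitoring
    threshold: float     # value that must be crossed for the policy to fire

@dataclass
class Policy:
    name: str
    condition: FiringCondition
    actions: List[Callable[[], None]] = field(default_factory=list)

    def matches(self, event_type: str, value: float) -> bool:
        # An incoming event fires the policy if it names the expected
        # event type and crosses the configured threshold.
        return (event_type == self.condition.event_type
                and value >= self.condition.threshold)

def add_web_ap_server() -> None:
    print("scale-out: adding one Web/AP server")

scale_out = Policy(
    name="dynamic-resource-addition",
    condition=FiringCondition("web_tier_response_time", threshold=2.0),
    actions=[add_web_ap_server],
)

if scale_out.matches("web_tier_response_time", 3.5):
    for action in scale_out.actions:
        action()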

2.2 Examples of policy control scenarios

Here we assume the application of policies to a Web-based three-tier system and explain policy control scenarios by example. In the Web-based three-tier system considered here, Web/AP servers are assumed to be connected to a load balancer so that scale-out load balancing is possible; on the other hand, scale-out is impossible for the DB server. The three scenarios considered here are illustrated in Fig. 1.

(1) Server replacement on fault

The operating conditions of every server in a Web-based three-tier system are monitored, and the system switches to a backup server in case of a fault.

(2) Dynamic resource addition on overload

When the load is distributed among multiple Web/AP servers using a load balancer, performance indicators (throughput, response time, etc.) are monitored for the entire Web layer rather than for individual Web/AP servers. When performance degradation is detected, performance is maintained by adding an appropriate number of Web/AP servers.

(3) Restriction of new logins on overload

When the load is high on both the Web/AP servers and the DB server, scale-out of the Web/AP servers would result in a further increase in the load on the DB server, and hence new logins are restricted to reduce the number of users.

Many other policies can be applied to an application system, such as backup processing and data deletion according to server disk space, bandwidth adjustment depending on the network load, emergency restart of services, regular server reboots, live migration of virtual servers to other hypervisors depending on the hypervisor load, etc.

3. Approach to Problems of Policy Execution Control

3.1 Problems of policy execution control

Existing policy control systems implicitly assume that only one policy at a time can be applied to an IT environment (managed object), and researchers have focused on methods of selecting an optimal policy for an event or a goal [7]. The policy control scenarios considered in Section 2.2 likewise work efficiently when executed sequentially; however, when multiple policies are executed simultaneously, the results may differ from the intentions of the policy setters, as follows.

Fig. 1. Use case scenarios for a Web-based three-tier application system.

Redundant execution of the same policies

In a Web-based three-tier system where user accesses are allocated among Web/AP servers by a load balancer, the load on the Web/AP servers increases uniformly as user accesses grow. If the CPU load on each Web/AP server is monitored and a policy is applied that adds Web/AP servers as the load increases, events indicating high load are reported from multiple Web/AP servers. As many Web/AP servers are then added as events are received, even though adding a single Web/AP server would suffice to reduce the load on the Web/AP tier.

Competitive operations on same object

When the load on the Web/AP tier increases or decreases and a policy is applied to add or remove Web/AP servers accordingly, without waiting for the completion of server addition or removal by a previously enabled policy, the required number of servers cannot be calculated properly.

Policy control systems have been widely studied in the fields of network management, including QoS (quality of service) control [8], and security policies, including access control. Such systems are now widely used in the operation management of information systems, for example in server load balancing [9], server troubleshooting, and service level management [10], which maintains a certain level of performance such as the response time of e-commerce sites. CIM-SPL (Common Information Model – Simplified Policy Language) by the DMTF (Distributed Management Task Force) is a well-known policy description language. With CIM-SPL, policies can be grouped and nested. The execution of policies in every group is based on one of two strategies, Execute_All_Applicable or Execute_First_Applicable. The former strategy means that all executable policies are executed, while the latter means that only the earliest executable policy is executed. This strategy designation pertains to whether all policies or only one policy are executed, which is different from the distinction between concurrent and sequential execution discussed in this study.

Other policy description languages, such as PCIM (Policy Core Information Model) [12] and its extension ACPL (Autonomic Computing Policy Language), also cannot define the conditions for concurrent policy execution.

As regards the possibility of simultaneous execution of multiple policies, there have been studies on execution scheduling based on policy execution time and priority [13]. However, these studies do not deal with concurrency control of resources subject to competition among multiple policies.

Thus, to the best of our knowledge, no research has dealt with policy control systems performing concurrency control of multiple policies with regard to resource contention. In this paper, we propose a concurrency control mechanism as well as a method of saving effort in execution scheduling.

One of the simplest decision-making strategies for concurrent execution is to provide the constraint part of the description of each policy with identifiers of policies that should not be executed concurrently. In the example shown in Fig. 2, policies A, B, and C are set for business application ABC. If policies A and B cannot be executed concurrently, but concurrent execution is possible for A and C, or B and C, the identifier "Policy B" is specified in the constraint part of policy A, and the identifier "Policy A" is specified in the constraint part of policy B. Based on this information, the policy control system can perform execution scheduling.

In such an approach, however, every time a policy X incompatible with policies A and C is added, the descriptions of policies A and C must be modified. The same applies when policies are removed. This not only may lead to mistakes in policy descriptions, but also involves much effort for adding or removing policies. Thus, this study aims to create a method for concurrency control of multiple policies with regard to resource contention, and a policy description format that saves effort in adding or removing policies.
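For contrast, here is a minimal sketch of this conventional scheme, with hypothetical policy names, showing the maintenance burden: adding one policy X that conflicts with A and C forces edits to three descriptions.

# Conventional scheme: each policy description lists, by identifier,
# the policies it must not run alongside.
policies = {
    "A": {"cannot_run_with": {"B"}},
    "B": {"cannot_run_with": {"A"}},
    "C": {"cannot_run_with": set()},
}

# Adding a policy X that conflicts with A and C touches three entries:
policies["X"] = {"cannot_run_with": {"A", "C"}}
policies["A"]["cannot_run_with"].add("X")
policies["C"]["cannot_run_with"].add("X")

def can_run_concurrently(p: str, q: str) -> bool:
    return (q not in policies[p]["cannot_run_with"]
            and p not in policies[q]["cannot_run_with"])

print(can_run_concurrently("A", "C"))   # True
print(can_run_concurrently("A", "X"))   # False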

Fig. 2. Policy constraint adjustment when adding a new policy by the conventional method.


3.2 Objectives of policy execution control

To solve the above problems, we set the following objectives for policy execution control and for the effort required for decision making about concurrent execution. Suppose that a total of 50,000 policies are set for 1000 business applications. If the events invoking each policy occur once every 3 days (16,666 times per day = 694 times per hour = 12 times per minute) and the policy execution time is 30 minutes on average, then policies are launched 360 times during 30 minutes (1800 s). That is, the total time taken by policy execution control to make concurrent execution decisions upon receiving events and to switch among policies by priority must be 5 s or less (1800 s / 360). We consider this the performance objective for policy execution control. This objective does not include action execution time, because action execution is handled by action execution functions other than policy execution control. As regards the effort required for policy description, we aim at a method in which policy description complexity does not increase exponentially with the number of set policies.
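The sizing arithmetic behind the 5-second figure can be reproduced directly; all numbers below come from the text, and only the variable names are ours.

policies = 50_000                         # total policies across 1000 applications
events_per_day = policies / 3             # each policy fires once in 3 days: ~16,666
events_per_hour = events_per_day / 24     # ~694
events_per_minute = events_per_hour / 60  # ~11.6, rounded to 12 in the text
launches_per_window = 12 * 30             # 360 launches in a 30-minute window
window_s = 30 * 60                        # 1800 s
budget_s = window_s / launches_per_window
print(budget_s)                           # 5.0 s available per policy launch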

3.3 Policy execution control to solve problems

To solve the problems described in Section 3.1 and to achieve the objectives formulated in Section 3.2, we introduce a logical structure tree of the application system, as shown in Fig. 3, and propose a method of specifying the range of resources that should not be manipulated concurrently.

The logical structure tree of an application system is a logical object tree that represents the application system by breaking it down into logical components. The root node is a logical object representing the entire application system, and the subordinate objects are components of their superior objects; that is, the tree is based on the "has-a" relationship in terms of object-oriented architecture.

The example in Fig. 3 pertains to the logical structure of an online shopping system based on a Web three-tier model. In the diagram, the root of the logical structure tree represents the entire application system (the online shopping system), which is associated with its system components, that is, the front-end layer and the back-end layer. In addition, the front-end layer has subordinate objects: a load balancer that allocates user requests among Web/AP servers, and a Web/AP layer that processes user requests. The Web/AP layer has a scale-out structure composed of Web/AP servers. Similarly, the back-end layer is composed of a DB server.
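As one way to realize this, the following Python sketch encodes the "has-a" tree of Fig. 3; the LogicalObject class and the node IDs are illustrative assumptions, since the paper does not prescribe a concrete data structure.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LogicalObject:
    obj_id: str
    children: List["LogicalObject"] = field(default_factory=list)

    def add(self, child: "LogicalObject") -> "LogicalObject":
        self.children.append(child)
        return child

    def find(self, obj_id: str) -> Optional["LogicalObject"]:
        # Depth-first lookup of a node by its ID.
        if self.obj_id == obj_id:
            return self
        for child in self.children:
            hit = child.find(obj_id)
            if hit is not None:
                return hit
        return None

    def subtree_ids(self) -> List[str]:
        # All object IDs covered by this node: itself plus its descendants.
        ids = [self.obj_id]
        for child in self.children:
            ids.extend(child.subtree_ids())
        return ids

# The online shopping system of Fig. 3.
shop = LogicalObject("online-shopping-system")
front = shop.add(LogicalObject("front-end-layer"))
front.add(LogicalObject("load-balancer"))
web_ap = front.add(LogicalObject("web-ap-layer"))
web_ap.add(LogicalObject("web-ap-server-1"))
web_ap.add(LogicalObject("web-ap-server-2"))
back = shop.add(LogicalObject("back-end-layer"))
back.add(LogicalObject("db-server"))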

As explained in Section 2.1, policy descriptions combine firing conditions to execute an operation with a procedure for execution. In addition, the IDs of objects in the logical structure tree of the application system are specified as policy application targets to define the range of modification caused by the respective actions. Using the hierarchical relations defined by the logical structure tree of the application system, the resources subject to policies are identified and decisions about concurrent execution are made.

Specifically, as shown in Fig. 4, the resources for applying policy A are extracted as the subtree of the front-end layer comprising the load balancer, the Web/AP layer, and the Web/AP servers. If the resource range thus extracted is not currently used by another policy, then concurrent execution is recognized as possible; otherwise, concurrent execution is considered impossible and exclusive control is applied to make the policy wait. If policy A dealing with the front-end layer and policy C dealing with the back-end layer apply to separate subtrees of the logical tree, then parallel execution is allowed.
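Continuing the sketch above, the concurrency decision can be modeled as locking the ID set of the target object's subtree and testing for overlap; the SubtreeLockManager name and its interface are our own assumptions.

class SubtreeLockManager:
    def __init__(self, root: LogicalObject) -> None:
        self.root = root
        self.locked = {}   # policy name -> set of locked object IDs

    def try_lock(self, policy_name: str, target_id: str) -> bool:
        # Lock the whole subtree under the policy's target object,
        # but only if it does not overlap resources already in use.
        target = self.root.find(target_id)
        if target is None:
            raise KeyError("unknown object: " + target_id)
        wanted = set(target.subtree_ids())
        in_use = set().union(*self.locked.values()) if self.locked else set()
        if wanted & in_use:
            return False          # contention: the policy must wait
        self.locked[policy_name] = wanted
        return True

    def release(self, policy_name: str) -> None:
        self.locked.pop(policy_name, None)

mgr = SubtreeLockManager(shop)
print(mgr.try_lock("policy-A", "front-end-layer"))  # True
print(mgr.try_lock("policy-B", "web-ap-layer"))     # False: overlaps with A
print(mgr.try_lock("policy-C", "back-end-layer"))   # True: disjoint subtree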

When creating the logical structure tree of an application system, a logical object representing the entire application system is defined as the root node, and subordinate objects are defined as components of superior nodes. Policy descriptions must include the IDs of the pertinent objects. Additional logical objects are defined as necessary when new policies are added. If many objects must be added, the logical structure tree can be extended further by breaking down existing objects. For example, a batch server can be introduced into the back-end layer to add a policy involving reconfiguration of the batch server that can be executed simultaneously with policies applied to the DB server. If such concurrent execution is not to be allowed, the policy involving reconfiguration of the batch server is instead applied to the entire back-end layer.

Thus, representation of an application system by a logical structure tree offers not only simple description of policies, but also reconfiguration of the logical structure tree and addition/removal of policies without modifying the contents of existing policies. The adjustment of existing policies (re-setting of concurrent execution conditions) thus becomes unnecessary.

However, using logical structure trees of application systems is disadvantageous in terms of concurrent execution efficiency. This is because the range of locked resources expands as the objects handled by a policy get closer to the root of the logical structure tree. In this study, we imposed a tree structure constraint on the logical representation of application systems so as to give priority to ease in avoiding the risks of concurrent execution. In the future, we plan to investigate improving concurrent execution efficiency by using different representation formats; however, providing control logic in the description parts related to each policy seems appropriate for greater efficiency of concurrent execution.

Fig. 3. Logical structure tree of the application system.

3.4 Preemptive execution scheduling

Preemptive execution depending on the importance and urgency of policies is implemented in addition to the concurrency control explained above, aiming at efficient execution of policies requiring prompt response even when concurrent execution is impossible.

Specifically, policies are prioritized by the policy setters, and the policies to be executed are ordered by the policy control system. That is, if policies A and B can both be executed but concurrent execution is impossible because of resource contention, the policy with the higher priority is executed first. In addition, if policy D with higher priority becomes executable during the execution of policy C with lower priority, execution override can be used so that policy D is executed immediately, and then the execution of policy C is resumed.
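A minimal scheduling sketch along these lines (names and priorities are illustrative): policies wait in a priority queue, and after each completed action the remainder of the running policy is re-queued, so a higher-priority arrival overrides it at the next preemption point.

import heapq
from itertools import count

_seq = count()   # tie-breaker: equal priorities run in arrival order
ready = []       # min-heap of (-priority, sequence number, name, pending actions)

def submit(name: str, priority: int, actions: list) -> None:
    heapq.heappush(ready, (-priority, next(_seq), name, list(actions)))

def step() -> bool:
    # Run one action of the highest-priority ready policy, then re-queue
    # the remainder; a higher-priority arrival will win the next step.
    if not ready:
        return False
    prio, seq, name, actions = heapq.heappop(ready)
    print(name + ": " + actions.pop(0))
    if actions:
        heapq.heappush(ready, (prio, seq, name, actions))
    return True

submit("policy-A", priority=1, actions=["action 1", "action 2"])
step()                                                # policy-A: action 1
submit("policy-B", priority=2, actions=["action 1"])  # B arrives mid-run
step()                                                # policy-B overrides A
step()                                                # policy-A resumes action 2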

4. Design of Prototype System

4.1 Architecture of prototype system

The component implementing the functions of policy execution control is called the policy coordinator. The architecture of a policy control system with a policy coordinator is shown in Fig. 5. The policy control system includes a policy engine that executes individual policies, and monitoring objects that monitor the application system subject to the policies. The policy engine acquires policies and the logical structure tree of the application system via an object manager. In addition, the policy coordinator is placed above the policy engine to control the execution of individual policies. The policy coordinator makes decisions about the concurrent execution of multiple policies using the logical structure tree of the application system maintained by the object manager, and schedules the execution of policy combinations according to priority, event occurrence time, and other parameters.

Figure 5 also briefly shows the processing flow from the reception of an event notification to policy execution. As indicated by items 1, 2, and 3 in the diagram, upon every event notification, the policy engine invokes the policy coordinator to make a decision about concurrent execution and to perform execution scheduling.

Fig. 4. Logical structure of the application system and policy descriptions specifying the range of concurrent policy execution.

Fig. 5. Architecture of the policy-based automation system.

When making decisions about concurrent execution, the policy coordinator refers each time to the logical structure tree of the application system. Therefore, the logical structure tree can be modified during operation of the application system, as explained in Section 3.3.

As shown in Fig. 6, a basic cycle of policy execution starts when the policy engine receives an event notification; then the monitoring information is analyzed for the necessity of execution (action conditions), and the actions specified by the policy are executed. The actions are executed repeatedly until their effect is verified, or until no further actions exist. Such repeated execution is called an N-fold action loop. Policy execution is terminated when action execution is recognized as unnecessary, or when there are no more actions to execute.
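A minimal sketch of this cycle (the monitoring callbacks are placeholders of our own): on each round the action condition is re-checked against fresh monitoring data, one action is executed, and its effect is verified.

def run_policy(needs_action, actions, effect_verified, max_rounds=10):
    # N-fold action loop: repeat until the effect is verified, action
    # execution becomes unnecessary, or no further actions exist.
    for _ in range(max_rounds):
        if not needs_action():              # re-analyze latest monitoring info
            return "action unnecessary"
        if not actions:
            return "no more actions to execute"
        actions.pop(0)()                    # execute the next action
        if effect_verified():               # verify the action's effect
            return "effect verified"
    return "gave up after max_rounds"

load = {"value": 90}
print(run_policy(
    needs_action=lambda: load["value"] > 70,
    actions=[lambda: load.update(value=60)],   # e.g., add a Web/AP server
    effect_verified=lambda: load["value"] <= 70,
))                                             # -> "effect verified"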

4.2 Context switching of policy execution

If concurrent execution is impossible, the policy to be executed is determined by priority. Execution is scheduled so that a policy with higher priority overrides a policy currently being executed that has lower priority. For the purpose of such override execution, the policy engine performs switching between policies. The timing of such policy execution switching is of great importance. If the switching interval is set long, then policies with high urgency or importance are blocked for a long time; on the other hand, if the switching interval is set short, then the switching overhead increases, impeding efficient policy execution.

In this study, the switching interval is set so that policies with high urgency or importance are not blocked for long periods, and the smallest unit of a transaction in policy execution is defined as extending from the analysis or decision concerning policy execution to the completion and verification of one action. This is illustrated in Fig. 7. Policy A is a policy with Normal priority including two actions, 1 and 2. Policy B has High priority and involves only one action. As shown in Fig. 7, the events for policies A and B occur at t0 and t−1, respectively. Switching is performed after verification at t2 and t3. As regards policy A kept on standby, when its execution is resumed at t4, the necessity of action is reconsidered using the latest monitoring information so as to track changes in the system state that might have occurred during the standby period.

5. Evaluation

Here we evaluate the achievement of the objectives set in Section 3.2. Specifically, the performance of policy execution control is evaluated from measurements taken during operation of the prototype policy coordinator. The complexity of policy description is evaluated by checking whether it grows quadratically (that is, combinatorially in policy pairs) with the number of set policies.

5.1 Evaluation of policy execution control performance

We measure the time required to make a decision about concurrent execution upon receiving an event, and the time required to switch policy executions by priority.

The object of evaluation is a Web-based three-tier online shopping system composed of a load balancer, a front-end layer comprising Web/AP servers, and a back-end layer consisting of a DB server. The evaluation environment is illustrated in Fig. 8. The multiple servers are connected via a network and are managed by a policy control system via a management LAN. The policy control system is implemented on the J2EE platform; the server has an Intel Pentium 4 CPU at 2 GHz and 2 GB of memory.

The Web-based three-tier online shopping system is a typical Web-based application system, which makes it appropriate for evaluation. The logical structure tree of the Web-based three-tier online shopping system is shown in Fig. 3. In the evaluated application system, the front-end layer and the back-end layer are connected under the online shopping system object. The front-end layer includes two objects: a load balancer that allocates user requests among the Web/AP servers, and a Web/AP layer that combines a Web layer receiving user requests with an AP layer executing service logic. The back-end layer includes a DB object that manages data. In addition, Web/AP objects with a scale-out structure are subordinate to the Web/AP layer.

Fig. 6. Action cycle of activated policy.

Fig. 7. Context switching between policy executions.
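The logical structure tree just described can be pictured concretely. The Python sketch below encodes it as nested dicts and applies one plausible concurrency rule: policies whose scopes lie in disjoint subtrees may run in parallel, while nested scopes must serialize. The actual rule comes from the policy descriptions of Fig. 4, which are not reproduced here, so the rule and all names below are illustrative assumptions.

# Hypothetical encoding of the logical structure tree of Fig. 3.
TREE = {
    "online-shopping": {
        "front-end": {
            "load-balancer": {},
            "web-ap-layer": {"web-ap-1": {}, "web-ap-2": {}},
        },
        "back-end": {"db": {}},
    }
}

def path_to(tree, target, path=()):
    """Return the root-to-target path of node names, or None if absent."""
    for name, subtree in tree.items():
        p = path + (name,)
        if name == target:
            return p
        found = path_to(subtree, target, p)
        if found:
            return found
    return None

def may_run_concurrently(scope_a, scope_b):
    """Concurrent iff neither scope's path is a prefix of the other's."""
    pa, pb = path_to(TREE, scope_a), path_to(TREE, scope_b)
    k = min(len(pa), len(pb))
    return pa[:k] != pb[:k]

# Examples:
#   may_run_concurrently("web-ap-1", "db")        -> True  (disjoint subtrees)
#   may_run_concurrently("front-end", "web-ap-2") -> False (nested scopes)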

Two policies with different priorities are set on this application system, as shown in Table 1. In this system, the execution of policy A, with multiple actions, is interrupted by the higher-priority policy B, as shown in Fig. 7. We measured the time required to make a decision about concurrent execution upon receiving an event, and the time required to switch policy executions by priority.

The measured times required to make a decision about concurrent execution are given in Table 2. The time required to start an action upon receiving an event, including the decision about concurrent execution [(a) in Fig. 7], was 230 ms; the time required to transition to the wait state after judging concurrent execution impossible [(b) in Fig. 7] was 371 ms. Of these totals, the decision on the possibility of concurrent execution itself took 10 and 16 ms, respectively.

The measured times required for switching between policy executions by priority are given in Table 3. The time required to switch from policy A to policy B [(c) in Fig. 7] was 220 ms, and the time required to switch from policy B to policy A [(d) in Fig. 7] was 365 ms.

The total time required to make a decision about concurrent execution upon receiving an event and to switch policy executions by priority was within 1 s; thus, the objective set in Section 3.2 was achieved.

5.2 Evaluation of policy description complexity

Here we estimate the complexity of policy description, including the adjustment of concurrent execution between existing and additional policies. We compare the conventional and proposed methods in terms of policy description complexity while varying the total number of policies set on the application system from 0 to 50 in increments of 10.

The results of the comparison are given in Fig. 9. In the conventional method, a decision about concurrent execution must be made for every pair of the n policies; that is, C(n, 2) = n(n − 1)/2 combinations must be considered. Denoting by α the initial description complexity of each policy, the total complexity for the description of n policies can be expressed as nα + n(n − 1)/2.

Table 1. Applied policies.

Table 2. Overhead of concurrency control.

Table 3. Overhead of context switching of policy executions.

Fig. 8. Structure of evaluation environment.

On the other hand, in the proposed method, some initial effort (β) is required to create the logical structure tree of the application system; after that, however, existing policies need not be modified, and only the additional policies must be described. Therefore, regardless of the number of established policies, the total complexity of describing n policies is nα + β; that is, the complexity grows only linearly with the number of policies, rather than quadratically as in the conventional method. Further, as indicated by Fig. 9, when the number of policies set on an application system exceeds 15, the total complexity of policy description under the proposed method, including the description of the logical structure, becomes lower than under the conventional method.
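The two cost expressions make the crossover in Fig. 9 easy to check. Assuming, as the conventional-method expression implies, that each pairwise adjustment costs one unit of description effort:

\[ C_{\text{conv}}(n) = n\alpha + \frac{n(n-1)}{2}, \qquad C_{\text{prop}}(n) = n\alpha + \beta . \]

The proposed method becomes cheaper once \( n(n-1)/2 > \beta \); a crossover near \( n = 15 \) therefore implies an initial tree-description effort of roughly \( \beta \approx 15 \cdot 14 / 2 = 105 \) pair-adjustment units. This value is inferred from the figure, not reported in the text.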

6. Conclusions

We have considered policy description using logical structure trees of application systems to specify the conditions for concurrent execution in policy control. Using the proposed policy description method and policy execution control based on it, we achieved stable and efficient policy execution scheduling at low policy description complexity.

Acknowledgment

The present study was part of the three-year Business Grid Computing Development Project instituted by the Ministry of Economy, Trade and Industry in 2003.


Fig. 9. Total man-hours (conventional versus proposed).

AUTHORS (from left to right)

Yutaka Kudo (member) completed the M.E. program in administrative engineering at the School of Science and Technology, Keio University, in 1995 and joined the Systems Development Laboratory of Hitachi, Ltd. He is now a senior researcher at the Yokohama Research Laboratory. His research interests are knowledge management technologies and IT operations management, specifically, automation of data center operation. He is a member of IPSJ.

Tomohiro Morimura (nonmember) completed the doctoral program in open and environmental systems at the Graduate School of Science and Technology, Keio University, in 2003, and joined the Central Research Laboratory, Hitachi, Ltd., subsequently transferring to the Systems Development Laboratory. He is now affiliated with the Software Division. His research interest is IT system operation management, specifically, fault analysis. He holds a D.Eng. degree, and is a member of IPSJ and IEICE.

Yoshimasa Masuoka (nonmember) completed the M.E. program at the School of Engineering, University of Tokyo, in 1993 and joined the Central Research Laboratory of Hitachi, Ltd., subsequently moving to the Systems Development Laboratory. He is now a department head at the Yokohama Research Laboratory. His research interests are distributed systems, specifically, middleware for corporate information systems. His current research topic is systems operation management. He is a member of IEEE.

Norihisa Komoda (fellow) completed the M.E. program in electrical engineering at the Graduate School of Engineering, Osaka University, in 1974, and joined the Systems Development Laboratory of Hitachi, Ltd. He was appointed an associate professor at Osaka University (Faculty of Engineering) in 1991 and a professor in 1992, and has been a professor of multimedia engineering in the Graduate School of Information Science and Technology since 2002. He holds a D.Eng. degree, and is a member of IEEE and other societies.


EN LA MISIÓN DE ESCOGER UN SISTEMA DE INFORMACIÓN DE RECURSOS HUMANOS (SIRH) EFECTIVO – EVALUANDO SU ROL Y FACTORES CLAVES DE ÉXITO

ON THE QUEST OF CHOOSING AN EFFECTIVE HR INFORMATION SYSTEM (HRIS) – ASSESSING ITS ROLE AND KEY SUCCESS FACTORS

SERGIO LÓPEZ BOHLE
Full-time faculty member, Departamento de Administración, Universidad de Santiago de Chile. Currently enrolled in the doctoral programme in Psychology, Pontificia Universidad Católica de Chile

SEBASTIÁN UGARTE GÓMEZ
Full-time faculty member, Facultad de Economía y Negocios, Universidad de Chile. Currently enrolled in the Master in Human Resource Management and Industrial Relations programme, University of Manchester

RESUMEN
Un Sistema de Información de Recursos Humanos (SIRH) efectivo es crítico en las organizaciones de hoy, de modo de lidiar con una serie de asuntos tales como mayores demandas organizacionales, un mayor uso y necesidad de información, presiones continuas de reducción de costos, así como hacer de RRHH un socio más estratégico del negocio. Este artículo resalta las principales razones consideradas por las organizaciones al momento de introducir un SIRH; además de elaborar un marco de dos tipos de dimensiones para evaluar la efectividad del sistema (cualitativa y cuantitativa), y dos facilitadores claves para sustentar su éxito (afinidad organizacional y arquitectura del sistema). También discute los principales desafíos y problemas enfrentados por organizaciones al momento de implementar un SIRH.

PALABRAS CLAVES: Gestión de información, Efectividad organizacional, Recursos Humanos, Sistemas de información.

ABSTRACT

An effective HR information system is critical in today's organizations, in order to cope with a number of issues such as increasing organizational demands, a more extensive use of and need for information, continuous pressures to reduce costs, as well as making HR a more strategic business partner. This paper highlights the main reasons that organizations consider when introducing an HRIS, in addition to developing a framework of two types of dimensions for assessing the effectiveness of the system (qualitative and quantitative) and two key enablers to sustain its success (organizational suitability and system architecture). It also discusses additional issues and challenges faced by organizations when implementing an HRIS.

KEY WORDS: Information management, Organizational effectiveness, Human resources, Information systems.


I. INTRODUCTION
A Human Resource Information System (HRIS) has become a key enabler of organizational performance and effectiveness. An HRIS can be defined, following Tannenbaum, as "a system used to acquire, store, manipulate, analyze, retrieve, and distribute pertinent information about an organization's human resources" (Haines and Petit, 1997, p. 261). The functionality and purpose of an HRIS have become more complete and complex in recent years, in response to greater organizational demands as well as more advanced IT solutions. Initially, the system was intended to support transactional processes in personnel management and to maintain control over operations. Nowadays, technology has enabled more sophisticated applications, aimed at improving the decision-making process and supporting global competitiveness. As a result, the human resources professional is expected to be liberated from transactional work, in order to develop a service orientation and participate in more strategic and organizational matters (Haines and Petit, 1997). The literature suggests that the role and contribution of an HRIS depend on what motivates the HR function to introduce a new information system (IS): operational, rational, or transformational drivers (Torrington et al., 2008). As every organization has different purposes, business context, organizational culture, and resources, among other factors, HRIS effectiveness and usage depend on the kind of criteria the organization considers important. The objective of this paper is to assess the role and key success factors of an effective HRIS for all members of an organization. To achieve this objective, the authors have undertaken a literature review of key articles on the subject to answer the following research questions:

1. What is the role and purpose of an HRIS?
2. What kind of criteria should be considered for assessing the effectiveness of the system?

To answer these questions, the paper is structured in three parts. Firstly, it will describe the role of an HRIS, outline the driving forces that lead organizations to introduce an IS, and explain how information can be used to increase HR capability and manage HR practices. Secondly, it will discuss the two major criteria (qualitative and quantitative) for assessing the effectiveness of the HRIS, as well as the two key enablers for achieving effectiveness (organizational suitability and system architecture). Lastly, it will critically analyse the existing issues and challenges that organisations confront when implementing and operating an HRIS.

II. THE PURPOSES OF INTRODUCING AND DEVELOPING AN HRIS.

Why do organizations want to implement an HRIS? What are its role and contribution? The reasons are multiple and may be strategic as well as practical. In general terms, an HRIS is introduced to achieve cost effectiveness, reduce administrative workload, standardize HR processes, or simply add strategic value to the organization's decision-making process.

There is consensus among scholars and practitioners that an HRIS is a powerful tool to enhance the HR capability of an organization. According to the capability model of Reddington, there are three main drivers toward that objective: (1) operational, i.e., cost effectiveness is sought by reducing headcount and the cost of services; (2) rational, i.e., improving services to managers and employees, who are increasingly demanding; and (3) transformational, i.e., focusing on the critical strategic drivers of the organization (Shrivastava and Shaw, 2003; Torrington et al., 2008). The implementation of an HRIS enables the automation of processes, which addresses the operational driver. Alternatively, adding new applications to the system makes it possible to tackle the rational dimension. However, to progress on the transformational driver, it is essential to develop an information culture, so that the system makes a substantial contribution to the decision-making process (Claver et al., 2001).


Many organizations seek to reduce the burden and layers of administration by reengineering their processes using technology. In this sense, there are several examples of corporations, such as Hewlett Packard, Campbell Soup, and IBM, that have reduced their human resource headcount and used information more effectively. These organizations were able to embrace the notion of value-added information for decision making, and of automation as a means of reengineering their HRM processes (Kovach and Cathcart, 1999). When investigating the reasons for introducing an HRIS, a survey of 33 firms revealed that 79% of them recognised cost savings or operational reasons as the main driving forces for change. The firms primarily expected that automation would facilitate the standardization of their HR processes and decrease the number of HR professionals in the organization. Similarly, Torrington et al. (2008) showed that the most popular reasons for introducing an HRIS are: quality improvement (91%), speed (81%) and flexibility of information (59%), reducing the administrative workload of the HR area (83%), and improving services to employees (56%). In the same way, additional research has found that system adoption is strongly determined by the HR strategy followed in the specific organization; for example, where the strategy is to reduce cost, a transactional IT system approach resulted in simpler HR administration (Ball, 2001). The informating functionality of the IS has been widely used by companies in all sectors: Intranet and Internet portals are used to disseminate information to employees and the external world (Haines and Petit, 1997; Kovach and Cathcart, 1999). Many authors agree that the ultimate purpose of the HRIS is strategic: firstly, because of the quality and value of the information provided to managers and HR staff for decision-making purposes; and secondly, because it enables HR executives to concentrate on more strategic HR activities such as facilitating organizational transformation, supporting knowledge management, and facilitating a learning environment (Kovach et al., 2002; Shrivastava and Shaw, 2003). A study by Lawler and Mohrman (2003) found a relationship between the use of an HRIS and the extent to which HR performs a strategic partner role: HR has a greater probability of becoming a true business partner in the strategy process when an integrated HRIS is in place. However, a fully integrated HRIS does not assure that HR will automatically become a strategic partner. On the other hand, a further purpose of having an integrated HRIS accessible to the whole organization, as well as supporting strategic HR matters, is the increasing devolution of HR practices to the line (Ball, 2001), where the HR area has become more a custodian and controller of such practices, and line managers the executors. Another important reason why big multinationals are embracing the concept of a fully integrated HRIS is as a preliminary stage for implementing outsourcing or shared-services initiatives. It is the case of the multinational oil company Shell, which in 2002 defined "increased efficiency and effectiveness of HR systems and processes" as one of its four global HR priorities.

The implementation of a shared-services sourcing and delivery model was considered the final outcome of this strategy; however, two key activity areas had to be addressed first to assure an effective and successful delivery of the model: first, simplify, standardise, and benchmark global people processes, in order to have a common and stable ground for the approximately 100 country operations worldwide; second, leverage the HRIS and fully embed its operation globally in the organization. In practical terms, and according to the Chartered Institute of Personnel and Development (CIPD) of the United Kingdom, technology has been used in organizations in five broad HR areas: people development and performance management; resource management (e.g., recruitment, selection, HR planning); employee relations and communications; HR information and accounting; and retention and reward (Torrington et al., 2008). The available applications seek to add value by using the information for decision making, rather than serving merely as a source of data collection and storage (Kovach et al., 2002). In that sense, multinationals such as Hewlett Packard, IBM, and Campbell Soup utilize the functionalities of an HRIS to improve the coordination of HR initiatives, support cross-national learning programmes, identify talent worldwide, and follow up and manage the quantity and quality of the cross-national workforce (Kovach and Cathcart, 1999).


III. KEY CRITERIA TO ASSESS THE EFFECTIVENESS OF AN HRIS.

Organizations are driven by different forces when implementing their IT management systems. From the writers' point of view, qualitative and quantitative parameters are the two major criteria for assessing the effectiveness of an HRIS. Furthermore, two key enablers support the success of the system: organizational suitability and system architecture. On top of that, other factors, such as the availability of resources, HR leadership, organizational maturity, and a process-oriented approach, also influence the effectiveness of an HRIS. According to Shrivastava and Shaw (2003), companies are more likely to realize the full potential of technology when IT programmes are undertaken with an orientation that allows the HR area to focus on more value-added initiatives. At the same time, Kovach et al. (2002) highlighted that features such as scalability, set-up, functionality, compatibility, cost, and security deserve consideration when assessing an effective HRIS.

This section will develop and explain the main elements of the qualitative and quantitative parameters, in addition to the two critical enablers.

Qualitative parameters
Ideally, HRIS effectiveness would be assessed through a financial profit analysis or return on investment. However, given the difficulty of measuring the financial impact of an IS, other measures of effectiveness are used, such as user satisfaction, based on attitudes and beliefs, and system usage (Haines and Petit, 1997). In research by these authors, three items explained 47% of the variance in user satisfaction with the IS: (a) the HRIS is flexible to interact with, (b) the HRIS is useful for performing the employees' jobs, and (c) use of the HRIS increases the employees' productivity. Nevertheless, the study showed that a higher rate of user satisfaction does not necessarily correlate with more use of the system. What is more, users who perceive the system as easy to learn and easy to use are expected to utilize it more (Fisher and Howell, 2004), and other studies have found a positive correlation between ease of use and user satisfaction (Haines and Petit, 1997). It has been observed that when a new IT system demands a great deal of mental effort to learn, on top of the employee's daily workload, it creates unintended reactions or generates a negative perception. Fisher and Howell (2004) described the example of a firm that designed an on-line performance appraisal system requiring the involvement of several parties at various stages of the process. The complexity of the functionality was perceived as excessively tedious, which generated negative perceptions of the system and consequently damaged the image of the HR function. Therefore, it is critically important to consider how users respond to IS developments when designing such functionalities. Another important aspect is the perceived usefulness of the system, i.e., the quality of the information it produces and how it increases productivity as well as job effectiveness; Haines and Petit (1997) found a strong positive correlation between this feature and user satisfaction. Put more simply, usefulness is the system's ability to transform input (data) into valuable, high-quality output (information) that provides management with a robust basis for decisions. Finally, the degree to which the HRIS reflects alignment with the organizational strategy is an important qualitative measure of effectiveness. Fisher and Howell (2004) noted that people will be more likely to rely on the HR practices of the IS if it reflects the corporate values of the firm and acts as an enabler for achieving the organization's goals.

Quantitative parameters
This is probably the most objective way to assess the effectiveness of an HRIS. However, the outcomes of the performance indicators have to be analysed with caution: it may be the case that demanding standards of control in a firm force people to use a system, so that usage figures do not necessarily reflect the level of user satisfaction or the qualitative features previously described. The automation of processes and the simplification of transactional HR activities generate an administrative advantage, which can be measured as a reduction in the time the HR staff need to achieve the expected results. Furthermore, in the last decade HRIS have been moving to a further stage: employee self-service. This approach gives employees direct, on-line access to their records, which reduces the administrative burden on HR staff even further (Kovach et al., 2002). The most significant parameter for measuring the success of an HRIS from the organization's perspective, as different surveys have revealed (Kovach and Cathcart, 1999; Torrington et al., 2008), is cost savings. Organizations like Merck & Co. estimated a cost reduction of 86% per HR transaction when it is performed by the employee instead of an HR professional: after the implementation of the employee self-service functionality, transaction costs were estimated at $2.30 when performed by the worker, compared to $17 when performed by HR staff. Additionally, the new system generated unexpected benefits such as improved data quality, as employees had to enter their own data (Kovach et al., 2002). Lastly, another preferred way to measure the effectiveness of the system is system usage, overall or per functionality, as employees' use of the system can be tracked. Haines and Petit (1997) found that individual and task characteristics such as age, gender, and education have some influence on system usage. This metric allows following up staff compliance with several HR processes, such as time and attendance, competence assessment, and performance appraisal.
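As a quick arithmetic check of the 86% figure reported above: moving a transaction from HR staff ($17) to employee self-service ($2.30) saves

\[ \frac{17 - 2.3}{17} = \frac{14.7}{17} \approx 0.865, \]

that is, roughly 86% per transaction, consistent with the estimate cited from Kovach et al. (2002).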

Organizational suitability
The organization is a key enabler of HRIS effectiveness. There are a number of requirements and preparations that, if attended to, will increase the likelihood of success of the system. One of these is the acceptance of the system by the organizational culture. Claver et al. (2001) concluded that when the information system is aligned with the firm's culture, a number of positive consequences follow, such as an increased level of employee satisfaction, since alignment facilitates internal integration and environmental adaptation, thus reducing the uncertainty and anxiety created by the new IT system. On top of this, corporate members will have a better predisposition towards the IS, given that a system is hardly controlled by formal measures alone, but rather by cultural rules. Lastly, it will be considered a reliable source and means of communication within and outside the firm. The availability of internal user support is expected to be a key success factor for HRIS effectiveness: a study by Haines and Petit (1997) determined that "the availability of internal support with the presence of a specialized HRIS department or unit had the strongest influence on user satisfaction and system usage" (p. 268). In addition to a specialized IS unit, users who are supported by top management and their immediate supervisor in using the system are expected to have a higher level of satisfaction and to use the IS more frequently. In the same way that employee involvement and participation (EIP) positively impacts every new initiative carried out in a firm, it has an important influence on the development of a new HRIS. EIP has to start in the early stages of project planning for the HRIS, in order to contribute to a successful implementation (Kovach et al., 2002); this kind of approach helps to ease issues that would otherwise surface in the implementation stage (Fisher and Howell, 2004). Researchers have also found a positive correlation between EIP in the HRIS development and implementation process and user satisfaction (Haines and Petit, 1997; Fisher and Howell, 2004). The benefits of EIP include a stronger feeling of ownership, as well as a better fit between user needs and expectations and the system design.


Lastly, appropriate training is a key organizational enabler for using the IS to its full potential. Research has found a positive correlation between the supply of training and higher levels of satisfaction and use (Haines and Petit, 1997; Fisher and Howell, 2004). Basically, users feel more confident using the system when their IT capability is stronger.

System architecture
Haines and Petit (1997) found in their study that system conditions such as documentation, accessibility, the presence of on-line applications, and the number of human resource management applications were highly significant features of success. The results of their work showed that the design of the IS and its characteristics are related to increased user satisfaction. It has been observed that "in-house" application development results in a better fit between user needs and expectations and a system customized to address those needs. Additionally, as technology becomes part of people's lives, an innovative system is likely to increase user satisfaction (Haines and Petit, 1997; Fisher and Howell, 2004). However, innovation and a higher number of user applications must not be confused with complexity, as the latter is associated with lower levels of user satisfaction and a higher likelihood of system failure (Fisher and Howell, 2004). The other critical decision that HR units face when implementing an HRIS is whether it will be an "off-the-shelf" vendor solution, based on best practices, or a customized, process-driven approach (Shrivastava and Shaw, 2003). The first provides the opportunity to redefine and reassess HR processes, aligning them with the best HR practices in the market; as the cited authors point out, "Applying technology to a bad process often results in a bad process that works faster" (2003, p. 207). Alternatively, the second option has the benefit of not having to deal extensively with the difficulty of changing the way people do things, in other words, with resistance to change. As a matter of fact, there is a consensus that changes to HR processes should be driven by business strategy rather than by technological reasons (Shrivastava and Shaw, 2003). Therefore, organizational priorities and drivers must be taken into consideration before choosing the most suitable option.

IV. ISSUES AND CHALLENGES FOR ORGANIZATIONS

Usually, the challenges that organizations have to cope with when implementing an HRIS arise not only in the planning and implementation stages but also after the system is in place. These challenges relate to meeting employees' expectations, the loss of personal interaction between HR and the people of the organization, the development of an informational culture, and the elaboration of an effective change management approach.

One of the major challenges that HR and organizations face when implementing HRIS solutions is to deliver on the promises made to users about the system's multiple advantages. People's expectations are high when they are told that their lives will change thanks to increased productivity and efficiency, better use and sharing of information, more effective distribution of work tasks, and improved service, among others. However, there are estimations "that nearly half of all new technology implemented in organizations fails" (Fisher and Howell, 2004, p. 243). Unfortunately, such situations bring negative consequences for organizations from the HR point of view, as future change initiatives will lose credibility in the organization. Therefore, to avoid harming the organization, it is better to commit to what can "really" be delivered, and once committed, make sure to "deliver the promise".

Another major concern when implementing an integrated HRIS is the loss of "personal touch" in the interaction with employees on HR-related matters. In the writers' opinion, the trade-off of automation and employee self-service portals is undoubtedly the depersonalization of transactions that used to be managed more directly between two or more parties. Nevertheless, personal touch continues to be important in today's organizations, as it is one of the most effective ways to build trust and consequently lead people. For that reason, HR needs to find other ways to compensate for this gap, by continuing to add "personal" value to the organization, no longer as an operational expert but as a true business partner and change agent.

Many scholars agree that in most organizations the potential of the HRIS is not fully utilized. Moreover, in many cases firms achieve automation of existing HR processes but fail to progress to the more advanced stage of an informational culture (Torrington et al., 2008). Claver et al. (2001) identified two organizational positions towards IT: a first, simpler one, where IT is important to the firm and is used to improve operational effectiveness (informatic culture); and a second, more sophisticated one, which views IT as a foundational enabler for making correct decisions through an HRIS (informational culture). To clarify the theory, consider a generalization as an example: Western corporations design the IS as effectively as possible from a technical perspective, and then employees are persuaded or "forced" to get used to it (informatic culture); conversely, Japanese firms design the IS to benefit and profit from the knowledge that employees already have (informational culture). The latter approach has the advantage of a more meaningful and useful HRIS that holds relevant information for the decision-making process and is accepted and valued by the organizational culture.

Putting in place an effective change management approach for a new IS development is probably one of the most challenging processes that HR has to cope with. It is known that cultural change needs plenty of time for the new shared beliefs to become fully embedded in the organization; unfortunately, organizations implement their IS in a short period of time (Claver et al., 2001). In addition to cultural change, new technology means a significant change in the organization that can greatly impact the way work is done, job roles, and performance (Fisher and Howell, 2004). This is why organizations must start preparing the change management process as early as possible, facilitating positive reactions while eliminating blockers and minimizing negative reactions. A winning approach to achieving user acceptance is to engage staff by communicating "honestly" how easy to use the new IS will be and how well it will support getting their jobs done; employees' expectations, however, must then be met. On the other hand, as people's levels of IT competence differ, the approach to "selling in" the system must vary depending on how likely the target population is to resist the new technology. Similarly, management must be treated with special consideration, as there is agreement among scholars that many IT projects fail because of managers' inability to manage change (Shrivastava and Shaw, 2003), especially if HR needs managers' support to act as change agents. To summarize, the literature identifies a number of enablers that increase the likelihood of a successful IT implementation: extensive, honest, and early communication; aiming for quick wins; fragmenting the system into partial deliverables; using trusted employees as change champions; running pilot tests to iterate improvements; and ensuring proper employee involvement and participation from the beginning.

V. CONCLUSIONS

The introduction of an information system into an organization is not an easy endeavour, given that a great many technical and behavioural factors must be taken into consideration to assure a successful implementation. This paper has sought to clarify the purpose of an HRIS for an organization, and to discuss the key criteria for assessing the effectiveness of the system.

Organizations are influenced by different driving forces to implement an HRIS: operational, rational, and transformational, with cost savings and other operational drivers being the main reasons for introducing one. Alternatively, HR aims to become a strategic partner to the organization by developing an advanced HRIS; however, an information system contributes to reducing transactional costs and the size of HR, and is not necessarily a guarantor of strategic partnership (Lawler and Mohrman, 2003).

When assessing the effectiveness of an HRIS, qualitative and quantitative parameters are the two major criteria to measure. The main elements of the qualitative dimension are user satisfaction, which reflects attitudes and beliefs towards the IS; ease of use and usefulness, which are positively correlated with user satisfaction; and alignment of the IS with the organizational strategy. On the other hand, the main factors within the quantitative dimension are the reduction in the time of HR administrative processes, cost savings, and system usage. Additionally, two key enablers support the success of the system. The first is organizational suitability, i.e., the users' acceptance of the system, the existence of internal user support, active employee involvement and participation, and appropriate training. The second is system architecture, i.e., the approach taken to develop or acquire the HRIS most suitable for the organization.

Organizations face several challenges in making an HRIS a key enabler of becoming a high-performance organization. One challenge is how to progress from an "informatic" to an "informational" culture, which increases the likelihood of improving the quality of information and using it as a competitive advantage to make better decisions and achieve organizational goals. Another challenge is how to manage a reliable and effective change process, to overcome the natural resistance to change that individuals and organizations show towards the "threats" of new technology. In this matter, acting promptly, seeking appropriate leadership champions, delivering on promises, and working to maximize the qualitative and quantitative parameters, as well as the key enablers mentioned previously, is a constructive way to achieve HR excellence in information systems.

REFERENCES

Ball, K. (2001) ‘The Use of Human Resource Information Systems: A Survey’, Personnel Review, 30, 5, 677-93.

Claver, E., Llopis, J., Reyes, M. and Gasco, J.L. (2001) ‘The Performance of Information Systems through Organisational Culture’, Information Technology and People, 14, 3, 247-260.

Fisher, S. and Howell, A. (2004) ‘Beyond User Acceptance: An Examination of Employee Reactions to Information Technology Systems’, Human Resource Management, 43, 2&3, 243-58.

Haines, V. and Petit, A. (1997) ‘Conditions for Successful Human Resource Information Systems’, Human Resource Management, 36, 2, 261-75.

Kovach, K. and Cathcart, C. (1999) ‘Human Resource Information Systems (HRIS): Providing Business with Rapid Data Access, Information Exchange and Strategic Advantage’, Public Personnel Management, 28, 2, 275-82.

Kovach, K., Hughes, A., Fagan, P. and Maggitti, P. (2002) ‘Administrative and Strategic Advantages of HRIS’, Employment Relations Today, 29, 2, 43-8.

Lawler, E. and Mohrman, S. (2003) ‘HR as a Strategic Partner: What Does It Take to Make It Happen?’ Human Resource Planning, 26, 3, 15-29.

Shrivastava, S. and Shaw, J. (2003) ‘Liberating HR through Technology’, Human Resource Management, 42, 3, 201-22.

Torrington, D., Hall, L. and Taylor, S. (2008) Human Resource Management, 7th edition, London: Financial Times Prentice Hall, Chapter 33.
