881 results for DDM Data Distribution Management testbed benchmark design implementation instance generator
Abstract:
This work was carried out during my time as a research assistant in the Technische Informatik group at the Universität Kassel. It presents the design and implementation of a cluster-based distributed scene graph. Rather than developing a new scene graph from scratch, the existing OpenSceneGraph was used as the foundation, and cluster support was integrated into it. In extending OpenSceneGraph, particular care was taken to leave the existing scene graph unchanged as far as possible, and the use and integration of external cluster-oriented software packages was avoided wherever possible. To distribute OpenSceneGraph, a dedicated socket-based communication layer was developed and integrated into OpenSceneGraph. This communication layer was used to provide OpenSceneGraph with sort-first and sort-last based visualization. Extending OpenSceneGraph with cluster support made it possible to drive arbitrary projection systems, for example a CAVE. To drive a CAVE, various input devices as well as tracking were integrated into OpenSceneGraph via VRPN; because the devices are connected through VRPN, they can also be used in the other cluster operating modes, such as a tiled display. The distribution of data across the cluster was kept separate from the OpenSceneGraph core, so any OpenSceneGraph-based application can be run on a cluster at any time without extensive modifications. Developers are therefore not constrained in their application development and do not have to distinguish between cluster-based and standalone applications.
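As a language-neutral illustration of the idea behind such a socket-based communication layer (the thesis implements it in C++ inside OpenSceneGraph; the message format, helper names and policy below are hypothetical), a master node could push per-frame camera state to its render clients like this:

```python
# Illustrative sketch only: length-prefixed JSON messages carrying per-frame
# camera state from a master to render nodes (not the thesis' actual protocol).
import json
import socket
import struct

def send_msg(sock: socket.socket, payload: dict) -> None:
    """Send one length-prefixed JSON message so receivers know where it ends."""
    data = json.dumps(payload).encode("utf-8")
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> dict:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode("utf-8"))

def broadcast_frame(clients, frame_no, view_matrix):
    """Master side: push the same camera state to every render node;
    a sort-first setup would additionally send a per-client viewport."""
    for sock in clients:
        send_msg(sock, {"frame": frame_no, "view": view_matrix})
```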
Abstract:
The increasing interconnection of information and communication systems leads to ever greater complexity and thus to a growing number of security vulnerabilities. Classical protection mechanisms such as firewalls and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a highly effective instrument for protecting against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in processing the enormous volume of network data efficiently and in developing an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volume of incoming network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on the Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB), and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events, which are then analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the GHSOM artificial neural network is studied in depth and substantially extended. Several approaches are proposed and discussed: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased through novel approaches to initialising the weight vectors and strengthening the winner neurons, and a self-adaptive procedure is introduced so that the model can be updated continuously. In addition, the main task of the NNB model is to further examine the unknown connections flagged by the EGHSOM and to verify whether they are in fact normal. Network traffic, however, changes constantly because of the concept-drift phenomenon, producing non-stationary network data in real time; this is handled by the update model. The EGHSOM model detects new anomalies effectively, and the NNB model adapts well to changes in the network data. In the experimental evaluation, the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic, and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them accurately. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms them all. This can be attributed to the following key points: processing of the collected network data, achieving the best performance (e.g. overall accuracy), detecting unknown connections, and developing a real-time intrusion detection model.
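The classification-confidence margin threshold mentioned above can be illustrated with a small sketch (hypothetical, not the dissertation's EGHSOM code): a connection vector is assigned to its best-matching map unit, and if the margin between the best and second-best match is too small, the connection is reported as unknown and handed over for further inspection, which is the role the NNB model plays in the framework.

```python
# Sketch of a classification-confidence margin threshold for flagging
# unknown connections (illustrative only, not the EGHSOM implementation).
import numpy as np

def classify_with_margin(conn_vec, unit_weights, unit_labels, margin_threshold=0.2):
    """Assign a connection vector to its best-matching map unit; if the
    relative margin between the two closest units is below the threshold,
    report the connection as 'unknown' for further inspection."""
    dists = np.linalg.norm(unit_weights - conn_vec, axis=1)
    order = np.argsort(dists)
    best, second = dists[order[0]], dists[order[1]]
    margin = (second - best) / (second + 1e-12)
    if margin < margin_threshold:
        return "unknown", margin
    return unit_labels[order[0]], margin

# Toy usage with three labelled map units
units = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
labels = ["normal", "attack", "normal"]
print(classify_with_margin(np.array([0.55, 0.5]), units, labels))
```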
Abstract:
Introduction: The working and health conditions of the working population have been studied across the different economic sectors, applying a variety of instruments to identify risk factors. However, research on workers in higher-education institutions is still scarce and limited. This research addressed the working and health conditions, and the associated risk factors, of laboratory and workshop employees in the different units of a private university in Bogotá D.C. Objective: To determine the working and health conditions in 2014 of laboratory and workshop staff in the different units of a private university in Bogotá D.C. Methods: A cross-sectional study was carried out on a sample of 87 laboratory and workshop workers. In the univariate analysis, frequencies and proportions were calculated for qualitative variables, and measures of central tendency and dispersion for quantitative variables, according to the distribution of the data. For the bivariate analysis, prevalences were compared between the locative, physical, chemical, biological, biomechanical and psychological risk factors of each unit and the working and health conditions, and their association with sociodemographic and occupational factors was tested using Pearson's chi-squared test of association. The significance level was set at p<0.05. Results: The variables inhalation of substances and manual handling of loads were found to be associated with the job position of laboratory and workshop workers. Most of the workers in the sample also reported no significant exposure to locative, physical, biological or psychosocial risks, whereas for chemical and biomechanical risks frequencies above 60% were found among the participants. The findings of this study reflect only the workers' perception of their working and health conditions, since no hygiene measurements or medical examinations were performed on the participants. Conclusions: This study found that exposure to locative, physical, biological and psychosocial risks is low, whereas it is high for chemical risk, due to the handling and inhalation of toxic chemical substances, and for biomechanical risk, due to prolonged and sustained postures, manual handling of loads, repetitive movements and forceful exertion. Regarding mental health, the population did not show any emotional or psychological condition of concern, which is in turn reflected in their good physical health.
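The bivariate analysis relies on Pearson's chi-squared test of association at p<0.05. As a minimal illustration of how such a test is run (the contingency counts below are invented, not the study's data):

```python
# Minimal Pearson chi-squared test of association on a contingency table
# (counts are invented for illustration only).
from scipy.stats import chi2_contingency

# Rows: exposed / not exposed to manual load handling
# Columns: laboratory staff / workshop staff
table = [[30, 12],
         [15, 30]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
if p_value < 0.05:
    print("Association is significant at the 5% level")
```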
Abstract:
Purpose – As online retailing grows in importance, there is increasing interest in the online customer experience. The purpose of this paper is to explore the role of gamification, the use of game mechanics, in enabling consumer engagement with online retailers. Design/methodology/approach – The research adopts a qualitative methodology, carrying out 19 in-depth interviews with individuals who are frequent online shoppers. Findings – The findings support the importance of including game elements to enhance the retail experience. However, the data also suggest that, without appropriate management, customers can subvert gamification strategies to create their own “games”, which increases competitive pressure between retailers. Practical implications – The paper suggests ways in which retailers might more successfully “gamify” their online retail stores and reduce the incidence of undesirable customer behaviour. Originality/value – This paper provides empirical evidence in an area where research is currently scarce: the role of gamification in the context of the online retail experience.
Abstract:
Purpose – While the charity retail literature emphasizes the richness of human resource practices among charity retailers, it rarely links these practices to their value for establishing charity retailers' brands. At the same time, while the retail branding literature increasingly emphasizes the central role of human resource practices in retail branding, it rarely explains how retailers should conduct such practices. The purpose of this study is to test the model proposed by Burt and Sparks in 2002 (the “fifth generation of retail branding”), which holds that a retail brand depends on the alignment between a retailer's substance (vision and culture) and its image as perceived by customers. Design/methodology/approach – The research is based on an ethnographic study conducted within the Oxfam Trading Division, GB, from October to December 2002. Findings – The study supports the Burt and Sparks model and makes explicit how human resource practices contribute to branding. It demonstrates that it was the alignment between the vision of Oxfam's top management and its new customer-oriented culture, two elements of its core substance mediated to customers by store employees, that enabled an improved customer perception of the brand. The study also elaborates on the Burt and Sparks model by specifying an ascending feedback loop that starts from customers' perception of the Oxfam brand and enables the creation of a suitable culture and vision, again mediated by store employees. Research limitations/implications – New research should explore whether and how retailers create synergies between the human resource and marketing functions to sustain their brand image. Practical implications – While the adoption of business practices by charity retailers is often discussed, this study highlights that commercial retailers could usefully transfer human resource best practices from leading charity retailers to develop their retail brands. Originality/value – The paper is of value to commercial retailers.
Abstract:
For many years, drainage design was mainly about providing sufficient network capacity. This traditional approach has been successful with the aid of computer software and technical guidance. However, drainage design criteria have been evolving due to rapid population growth, urbanisation, climate change and increasing sustainability awareness. Sustainable drainage systems that bring benefits in addition to water management have been recommended as better alternatives to conventional pipes and storages. Although the concepts and good practice guidance have been communicated to decision makers and the public for years, network capacity still remains a key design focus in many circumstances, while the additional benefits are generally treated as secondary. Yet the picture is changing. The industry is beginning to realise that delivering multiple benefits should be given top priority, while the drainage service itself can be considered a secondary benefit instead. The shift in focus means the industry has to adapt to new design challenges, and new guidance and computer software are needed to assist decision makers. For this purpose, we developed a new decision support system. The system consists of two main components – a multi-criteria evaluation framework for drainage systems and a multi-objective optimisation tool. Users can systematically quantify the performance, life-cycle costs and benefits of different drainage systems using the evaluation framework. The optimisation tool helps users determine combinations of design parameters, such as the sizes, order and type of drainage components, that maximise multiple benefits. In this paper, we focus on the optimisation component of the decision support framework. The optimisation problem formulation, parameters and general configuration are discussed. We also examine the sensitivity of individual variables and the benchmark results obtained using common multi-objective optimisation algorithms. The work described here is the output of an EngD project funded by EPSRC and XP Solutions.
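As a generic, hedged illustration of the kind of search such an optimisation tool performs (the objective functions, parameters and numbers below are invented and are not the project's actual formulation), the following sketch scores candidate designs on two objectives and keeps only the non-dominated, Pareto-optimal candidates:

```python
# Generic Pareto-filter sketch for a two-objective drainage design search
# (objective models and parameter ranges are invented for illustration).
import random

def evaluate(design):
    """Toy objectives: minimise life-cycle cost, maximise amenity benefit."""
    storage, permeable_area = design
    cost = 50 * storage + 200 * permeable_area           # invented cost model
    benefit = 10 * storage ** 0.5 + 30 * permeable_area  # invented benefit model
    return cost, benefit

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and differs from b."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b

random.seed(1)
candidates = [(random.uniform(1, 20), random.uniform(0.1, 2.0)) for _ in range(200)]
scored = [(evaluate(d), d) for d in candidates]
pareto = [d for s, d in scored if not any(dominates(s2, s) for s2, _ in scored)]
print(f"{len(pareto)} non-dominated designs out of {len(candidates)}")
```

In practice a dedicated multi-objective algorithm such as NSGA-II would search the design space rather than random sampling, but the dominance test that defines the Pareto front is the same.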
Abstract:
This dissertation sets out to map the sociotechnical networks of design in the field of management along the lines proposed by Actor-Network Theory, and to present the translation process the term underwent as it entered the field. To this end, it surveyed and analysed articles published on the subject in the leading journals and publications in the field of organisation studies. These texts show how, in recent decades, design has undergone an expansion of meaning and application towards management (or of management towards design), through the approaches known as design thinking, design science or design process. The research is justified given that this topic is present in the leading management and organisation-studies journals as an important tool for solving problems that challenge organisational systems, such as change, entrepreneurship and innovation (Stephens & Boland, 2014). It is worth highlighting that design has increasingly been regarded as a decisive activity in the economic battle (Callon, 1986), in shaping current lifestyles and in building our future world. In the field of organisation studies, as this research shows, design emerges as an approach that overcomes the dichotomy between positivism and the critical approach in organisational theory (Jelinek, Romme & Boland, 2008). Finally, this dissertation focused on mapping the sociotechnical networks and describing the four main phases of the translation process of design into the field of management, namely: (a) problematisation, marked by the publication of Herbert A. Simon's The Sciences of the Artificial in 1969, in which he argues for design as a basic skill for all professional specialities, including management (Simon, 1996); (b) interessement, with designers advocating the design of complex systems such as organisations; (c) engagement, with designers and organisation theorists joining forces around design in management as an alternative for overcoming the dichotomy between positivism and critical studies in administration; and (d) mobilisation, in which organisation theorists take up the defence of design in management as a way of accounting for contemporary organisational models with more permeable and constantly reshaped boundaries.
Abstract:
IT capability is an organizational ability to perform IT-related activities more effectively and an important mechanism for creating value. It is built (through stages of creation and development) by management initiatives that improve the performance of those activities, using human resources and complementary IT assets that drive the evolution of the associated organizational routines. This research examines the IT capabilities related to the SIG (integrated institutional management systems) built and deployed at UFRN (Universidade Federal do Rio Grande do Norte) to run and control administrative, academic and human resources activities. Since 2009, through cooperation agreements with federal direct-administration bodies and educational institutions, UFRN has supported the implementation of these systems, which currently involve more than 30 institutions. The present study aims to understand how the IT capabilities relevant to the design, implementation and dissemination of SIG were built over time. It is a single, qualitative, longitudinal case study, carried out by capturing, coding and analysing secondary data and semi-structured interviews conducted primarily with members of the Superintendência de Informática, the organizational unit responsible for the SIG systems at UFRN. As a result, the technical, internal relationship and external cooperation capabilities were identified as relevant to the successful trajectory of the SIG systems, and they evolved in different ways. The technical capability, initiated in 2004, went through the stages of creation and development until it reached a stage of stability in 2013, due to technological limits. The internal relationship capability, begun in 2006, went through the stages of creation and development, extended its scope of activities in 2009, and has been developing since then. Unlike the standard life cycle observed in the literature, the external cooperation capability started in 2009 with intense initiatives and routine development, which then decreased and ceased in 2013 in order to stabilize the technological infrastructure already created for the cooperating institutions. The start of cooperation in 2009 was also identified as an important selection event, responsible for changing or creating evolution trajectories in all three capabilities. The most frequent improvement initiatives were organizational in nature, and the internal planning activity was transformed across the routines of the three capabilities. Important resources and complementary assets were identified as relevant to these initiatives, such as technical knowledge of human resources for the technical and external cooperation capabilities, and business knowledge for all of them, as well as IT assets: the iproject application for controlling development processes, and the wiki document repository. All of these resources and complementary assets grew along with the capabilities, demonstrating their strategic value to SINFO/UFRN.
Abstract:
This project aims to develop methods for classifying data in a Data Warehouse for decision-making purposes. A further goal is to reduce the attribute set of a Data Warehouse so that the reduced set preserves the properties of the original one. With a reduced set, the computational cost of processing is lower, attributes that are irrelevant to certain kinds of situations can be identified, and patterns can be recognised in the database that support decision making. To achieve these objectives, the Rough Sets algorithm will be implemented. PostgreSQL was chosen as the database management system because of its efficiency and maturity, and because it is an open-source (freely distributed) system.
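A minimal sketch of the rough-set idea the project builds on: an attribute subset preserves the properties of the original set if it yields the same positive region, i.e. the same objects remain unambiguously classifiable with respect to the decision attribute. The toy table and attribute names below are invented; the project itself operates on a Data Warehouse stored in PostgreSQL.

```python
# Rough-set style check that a reduced attribute set preserves the
# classification ability of the full set (toy data, invented attributes).
from itertools import combinations

rows = [
    {"outlook": "sunny",  "wind": "weak",   "humid": "high", "play": "no"},
    {"outlook": "sunny",  "wind": "strong", "humid": "high", "play": "no"},
    {"outlook": "rain",   "wind": "weak",   "humid": "high", "play": "yes"},
    {"outlook": "rain",   "wind": "strong", "humid": "low",  "play": "no"},
    {"outlook": "cloudy", "wind": "weak",   "humid": "low",  "play": "yes"},
]
condition_attrs = ["outlook", "wind", "humid"]
decision_attr = "play"

def partition(attrs):
    """Group row indices into indiscernibility classes w.r.t. `attrs`."""
    classes = {}
    for i, r in enumerate(rows):
        classes.setdefault(tuple(r[a] for a in attrs), []).append(i)
    return list(classes.values())

def positive_region(attrs):
    """Objects whose indiscernibility class is consistent on the decision."""
    pos = set()
    for cls in partition(attrs):
        if len({rows[i][decision_attr] for i in cls}) == 1:
            pos.update(cls)
    return pos

full_pos = positive_region(condition_attrs)
for k in range(1, len(condition_attrs)):
    for subset in combinations(condition_attrs, k):
        if positive_region(list(subset)) == full_pos:
            print("candidate reduced attribute set:", subset)
```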
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if analyzed properly and in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
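As a loose illustration of the application-level quality knowledge argued for above (the names and replication policy are invented and are not Quasit's or LAAR's actual interfaces), the sketch below tags each data flow with a QoS class and replicates only the operators that serve flows requiring strict guarantees, leaving best-effort flows on single instances to save resources:

```python
# Toy sketch: per-flow QoS classes decide whether a stream operator is
# replicated (invented policy, not the Quasit/LAAR APIs).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Flow:
    name: str
    qos: str  # "strict" or "best_effort"

def make_operator(fn: Callable[[float], float], replicas: int) -> Callable[[float], float]:
    """Wrap `fn` in `replicas` identical copies; here they run sequentially
    and the first result is returned, standing in for active replication."""
    def run(x: float) -> float:
        results = [fn(x) for _ in range(replicas)]
        return results[0]
    return run

def deploy(flows: List[Flow]) -> Dict[str, Callable[[float], float]]:
    plan = {}
    for f in flows:
        replicas = 2 if f.qos == "strict" else 1  # invented replication policy
        plan[f.name] = make_operator(lambda x: x * 1.8 + 32, replicas)
    return plan

operators = deploy([Flow("patient_vitals", "strict"), Flow("ambient_temp", "best_effort")])
print(operators["patient_vitals"](37.0))  # replicated path
print(operators["ambient_temp"](21.5))    # single instance
```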
Abstract:
An understanding of interruptions in healthcare is important for the design, implementation, and evaluation of health information systems and for the management of clinical workflow and medical errors. The purpose of this study is to identify and classify the types of interruptions experienced by Emergency Department (ED) nurses working in a Level One Trauma Center. This was an observational field study of Registered Nurses (RNs) employed in a Level One Trauma Center, using the shadowing method. Results of the study indicate that nurses were both recipients and initiators of interruptions. Telephones, pagers, and face-to-face conversations were the most common sources of interruptions. Unlike other industries, the healthcare community has not systematically studied interruptions in clinical settings to weigh their necessity against their sometimes negative consequences, such as medical errors, decreased efficiency, and increased costs. The study presented here is an initial step toward understanding the nature, causes, and effects of interruptions, thereby improving both the quality of healthcare and patient safety. We developed an ethnographic data collection technique and a data coding method for the capture and analysis of interruptions. The interruption data we collected are systematic, comprehensive, and close to exhaustive. They confirm the findings from earlier studies by other researchers that interruptions are frequent events in critical care and other healthcare settings. We are currently using these data to analyze the workflow dynamics of ED clinicians, to identify the bottlenecks of information flow, and to develop interventions to improve the efficiency of emergency care through the management of interruptions.
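A tiny, invented example of what coding and tallying such shadowing observations can look like (the records and categories below are illustrative and do not reproduce the study's coding scheme):

```python
# Invented example of coding shadowing observations and tallying
# interruptions by source and by role (not the study's actual scheme).
from collections import Counter

observations = [
    {"source": "telephone",    "role": "recipient"},
    {"source": "pager",        "role": "recipient"},
    {"source": "face-to-face", "role": "initiator"},
    {"source": "telephone",    "role": "recipient"},
    {"source": "face-to-face", "role": "recipient"},
]

by_source = Counter(o["source"] for o in observations)
by_role = Counter(o["role"] for o in observations)
print("interruptions by source:", dict(by_source))
print("recipient vs initiator:", dict(by_role))
```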
Abstract:
Recently, it has been proposed that the evaluation of effects of pollutants on aquatic organisms can provide an early warning system of potential environmental and human health risks (NRC 1991). Unfortunately, there are few methods available to aquatic biologists to conduct assessments of the effects of pollutants on aquatic animal community health. The primary goal of this research was to develop and evaluate the feasibility of such a method. Specifically, the primary objective of this study was to develop a prototype rapid bioassessment technique similar to the Index of Biotic Integrity (IBI) for the upper Texas and Northwestern Gulf of Mexico coastal tributaries. The IBI consists of a series of "metrics" which describe specific attributes of the aquatic community. Each of these metrics is given a score, and the scores are then subtotaled to derive a total assessment of the "health" of the aquatic community. This IBI procedure may provide an additional assessment tool for professionals in water quality management. The experimental design consisted primarily of compiling previously collected data from monitoring conducted by the Texas Natural Resource Conservation Commission (TNRCC) at five bayous classified according to potential for anthropogenic impact and salinity regime. Standardized hydrological, chemical, and biological monitoring had been conducted in each of these watersheds. The identification and evaluation of candidate metrics for inclusion in the estuarine IBI was conducted through the use of correlation analysis, cluster analysis, stepwise and normal discriminant analysis, and evaluation of cumulative distribution frequencies. Scores for each included metric were determined based on exceedances of specific percentiles. Individual scores were summed and a total IBI score and rank for the community computed. Results of these analyses yielded the proposed metrics and rankings listed in this report. Based on the results of this study, incorporation of an estuarine IBI method as a water quality assessment tool is warranted. Adopted metrics were correlated to seasonal trends and less so to salinity gradients observed during the study (0-25 ppt). Further refinement of this method is needed using a larger, more inclusive data set which includes additional habitat types, salinity ranges, and temporal variation.
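The scoring mechanics described above can be sketched generically: each community metric is scored against percentile-based thresholds and the scores are summed into a total index. The thresholds and metric values below are invented, not the study's calibrated ones.

```python
# Generic IBI-style scoring sketch: each metric receives 1, 3 or 5 points
# depending on the percentile band its value falls in, and the scores are
# summed into a total index (all numbers invented for illustration).
def score_metric(value, p25, p75, higher_is_better=True):
    if not higher_is_better:
        value, p25, p75 = -value, -p75, -p25
    if value >= p75:
        return 5
    if value >= p25:
        return 3
    return 1

sample_metrics = {
    # metric: (observed value, 25th percentile, 75th percentile, higher is better)
    "species_richness":  (18, 10, 20, True),
    "pct_tolerant_taxa": (35, 20, 50, False),
    "total_abundance":   (420, 300, 800, True),
}

total = sum(score_metric(v, p25, p75, hib) for v, p25, p75, hib in sample_metrics.values())
print("IBI total score:", total, "of", 5 * len(sample_metrics))
```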
Abstract:
Introduction: International and national sports federations as well as their member organisations (usually sports clubs) are key actors within the sports system and have a wide range of relationships outside the sports system (e.g. with the state, sponsors, and the media). They are currently facing major challenges such as growing competition in top-level sports, democratisation of sports with “sports for all” and sports as the answer to social problems (integration, education, health, unemployment, etc.). In this context, professionalising sports organisations seems to be an appropriate strategy to face these challenges and solve current problems. This has led to a profound organisational change, particularly within sports federations, characterised by the strengthening of institutional management (managerialism) and the implementation of efficiency-based management instruments and paid staff. In this context the questions arise how sports organisations professionalise and what consequences this may have. Theoretical framework: The goal of our presentation is to review the international literature and develop an appropriate concept of professionalisation in sport federations. Our multi-level approach based on social theory of action integrates the current concepts and perspectives for analysing professionalisation in sports federations. We specify the framework for the following research perspectives: (1) forms, (2) causes and mechanisms, (3) consequences and (4) dynamics, and discuss the reciprocal relations between sports federations and their member organisations in this context. When analysing these different research perspectives, it is important to select or elaborate appropriate theoretical concepts to match the general multi-level framework. Discussion: The elaborated multi-level framework for analysing professionalisation in sports federations is able to integrate most of the existing theoretical concepts and therefore the broad range of endogenous as well as exogenous factors that might influence the professionalisation of sports organisations. Based on the theoretical framework, we can identify several consequences for the methodological design of studies intending to analyse the different perspectives of professionalisation in sports organisations: Data have to be collected on the different levels. Not only the forms of professionalisation and relevant structures of the organisations should be taken into account but also important characteristics of the environment (macro level) as well as members or member organisations, particularly key actors who might play a crucial role in gaining an understanding of professionalisation processes in sports organisations. In order to carry out a complex organisational research design, it seems necessary to focus on case studies – an approach that has become increasingly important in organisational research. Different strategies and methods of data collection have to be used within the case studies (e.g. interviews with experts within the organisations, questionnaires for selected people in the organisation, document analysis). Therefore, qualitative and quantitative research strategies have to be combined.
Abstract:
In mining, estimating the size distribution curve of blast debris is important for evaluating the design, execution and optimisation of the blast. Digital photographic systems are currently used for this purpose, deriving the curve from images taken by a camera. In this project, the fragmentation of six blasts carried out in 2012 at the “El Aljibe” quarry, in the municipality of Almonacid de Toledo, was analysed with an automatic online system (Split Online) and with software from another company (WipFrag) that allows manual editing of the images. A total of 120 randomly selected images from the six blasts were analysed. The size distribution study shows that the curves obtained with the two systems are, statistically, not the same over most of their range; a possible relationship between the characteristic sizes X50 and X80 was therefore analysed, leading to the conclusion that neither system is fully reliable and that both need to be calibrated against real fragmentation data obtained from weigh scales.
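The characteristic sizes X50 and X80 compared in the study are the sieve sizes at which the cumulative passing curve reaches 50% and 80%. A minimal sketch of reading them off a measured curve by linear interpolation (the curve values below are invented, not data from the “El Aljibe” blasts):

```python
# Reading characteristic fragment sizes off a cumulative passing curve by
# linear interpolation (curve values invented for illustration).
import numpy as np

sizes_mm = np.array([10, 25, 50, 100, 200, 400, 800])   # sieve sizes
passing_pct = np.array([5, 18, 37, 61, 82, 95, 100])    # % passing each size

def characteristic_size(target_pct):
    """Size at which the cumulative curve reaches `target_pct` % passing."""
    return float(np.interp(target_pct, passing_pct, sizes_mm))

print(f"X50 = {characteristic_size(50):.0f} mm, X80 = {characteristic_size(80):.0f} mm")
```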