892 results for Process control -- Data processing
Abstract:
The development of the Internet has made it possible to transfer data ‘around the globe at the click of a mouse’. New business models such as cloud computing, the latest driver illustrating the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space: firstly, globalization and the outsourcing of economic actors entail an ever-growing exchange of personal data. Secondly, security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data for an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent individual and borderless traces in both space and time. Calls from both the public and private sectors for an international legal framework for privacy and data protection have therefore become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform their use of data. Thus, Google was not alone in calling for the creation of ‘global privacy standards’. Efforts are underway to review established privacy foundation documents, and there are similar efforts to examine standards in global approaches to privacy and data protection. The most remarkable steps so far were the Montreux Declaration, in which the privacy commissioners appealed to the United Nations ‘to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights’. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in 2009 in Madrid, and in 2010 at the 32nd conference in Jerusalem. In a globalized world, free data flow has become an everyday need.
Thus, the aim of global harmonization should be that it makes no difference for data users or data subjects whether data processing takes place in one country or in several. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries whose privacy laws set lower standards, or which have no such laws at all. To control that risk, some countries have incorporated special safeguards into their domestic law. Such controls, in turn, may interfere with the need for free international data flow. A formula has to be found to ensure that privacy protection at the international level does not prejudice this principle.
Abstract:
Applying location-focused data protection law within a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation has introduced many changes to the current data protection framework, the complexities of data processing in the cloud involve multiple layers of actors and intermediaries that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.
Abstract:
In the long run, the widespread use of slide scanners by pathologists requires an adaptation of teaching methods in histology and cytology to target these new possibilities of image processing and presentation via the Internet. Accordingly, we were looking for a tool for teaching the microscopic anatomy, histology, and cytology of tissue samples that could combine image data from light and electron microscopes independently of the microscope supplier. Using the example of a section through a villus of the jejunum, we describe here how to process image data from light and electron microscopes to obtain a single image stack that allows structures to be correlated from the microscopic-anatomical down to the cytological level. With commercially available image-presentation software that we adapted to our needs, we present a platform that allows the presentation of this new material, as well as of older material, independently of the microscope supplier.
Abstract:
Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifacts. For these reasons, the assessment of detector uniformity is one of the most common activities in a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the detection of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to reviewers so that they may be better equipped to identify performance degradation before it manifests in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions of non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time-series floods with induced, progressively worsening defects present within the field of view. The sensitivity of conventional, global figures-of-merit for detecting changes in uniformity was evaluated and compared to these new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions of non-uniformity before any single flood image has a NEMA uniformity value in excess of 5%.
The sensitivity of these image-space algorithms was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures-of-merit demonstrated their sensitivity to shifts in detector uniformity. The image-space algorithms are computationally efficient. Therefore, the image-space algorithms should be used concomitantly with the trending of the global figures-of-merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
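As a point of reference for the global figure-of-merit discussed in this abstract, the NEMA integral-uniformity calculation can be sketched as follows. The 3×3 nine-point smoothing kernel and the integral-uniformity formula follow the NEMA convention; the function name, synthetic flood data, and induced defect are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def nema_integral_uniformity(flood, fov_mask=None):
    """Integral uniformity (percent) of a smoothed flood image.

    Applies the 3x3 nine-point weighted smoothing kernel used in NEMA
    uniformity analysis, then computes 100 * (max - min) / (max + min)
    over the field of view (whole image if no mask is given).
    """
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float) / 16.0
    # Plain 2-D convolution with edge padding (no SciPy dependency).
    padded = np.pad(flood.astype(float), 1, mode="edge")
    smoothed = np.zeros(flood.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            smoothed += kernel[dy, dx] * padded[dy:dy + flood.shape[0],
                                                dx:dx + flood.shape[1]]
    values = smoothed[fov_mask] if fov_mask is not None else smoothed.ravel()
    return 100.0 * (values.max() - values.min()) / (values.max() + values.min())

# Synthetic flood with a mild regional cold defect (illustrative only).
rng = np.random.default_rng(0)
flood = rng.poisson(10_000, size=(64, 64)).astype(float)
flood[30:34, 30:34] *= 0.90  # 10% local sensitivity loss
iu = nema_integral_uniformity(flood)
```

A trend analysis like the one described above would track `iu` over a series of daily floods; the thesis's point is that a small regional defect can stay below the 5% action limit of this global metric while an image-space method already localizes it.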
Abstract:
Volunteers are the most important resource for non-profit sport clubs seeking to sustain their viability (e.g. their sporting programs). Although many people do voluntary work in sport clubs, stable voluntary engagement can no longer be taken for granted. This difficulty is confirmed by existing research across various European countries. From a club management point of view, a detailed understanding of how to attract volunteers and retain them in the long term is becoming a high priority. The purpose of this study is (1) to analyse the influence of individual characteristics and corresponding organisational conditions on volunteering in sports clubs and (2) to examine the decision-making processes involved in implementing effective strategies for recruiting volunteers. For the first perspective, a multi-level framework for investigating the factors of voluntary engagement in sports clubs is developed. The individual and context factors are estimated in different multi-level models based on a sample of n = 1,434 members from 36 sport clubs in Switzerland. Results indicate that volunteering is not just an outcome of individual characteristics such as a lower workload, higher income, children belonging to the sport club, longer club membership, or a strong commitment to the club. It is also influenced by club-specific structural conditions: volunteering is more probable in rural sports clubs, whereas growth-oriented goals in clubs have a destabilising effect. Concerning the decision-making processes, an in-depth analysis of recruitment practices for volunteers was conducted in nine selected sport clubs (case study design) based on the garbage can model. Results show that the decision-making processes are generally characterised by a reactive approach in which dominant actors try to handle personnel recruitment problems in the administration and sport domains through routine formal committee work and informal networks.
In addition, it proved possible to develop a typology that delivers an overview of different decision-making practices in terms of the specific interplay of the relevant components of process control (top-down vs. bottom-up) and problem processing (situational vs. systematic). Based on these findings, some recommendations for volunteer management in sport clubs are worked out.
Abstract:
Effective strategies for recruiting volunteers who are prepared to make a long-term commitment to formal positions are essential for the survival of voluntary sport clubs. This article examines the decision-making processes behind these efforts. Under the assumption of bounded rationality, the garbage can model is used to grasp these decision-making processes theoretically and access them empirically. Based on a case study framework, an in-depth analysis of recruitment practices was conducted in nine selected sport clubs. Results showed that the decision-making processes are generally characterized by a reactive approach in which dominant actors try to handle personnel recruitment problems in the administration and sport domains through routine formal committee work and informal networks. In addition, it proved possible to develop a typology that delivers an overview of different decision-making practices in terms of the specific interplay of the relevant components of process control (top-down vs. bottom-up) and problem processing (situational vs. systematic).
Abstract:
As the amount of space debris in the geostationary ring increases, it becomes mandatory for every satellite operator to avoid collisions. Space debris in geosynchronous orbits may be observed with optical telescopes. Unlike radar, which requires very large dishes and transmission powers for sensing high-altitude objects, optical observations do not depend on active illumination from the ground and may be performed with notably smaller apertures. The detectable size of an object depends on the aperture of the telescope, the sky background, and the exposure time. With a telescope of 50 cm aperture, objects down to approximately 50 cm may be observed. This size is regarded as a threshold for the identification of hazardous objects and the prevention of potentially catastrophic collisions in geostationary orbits. In collaboration with the Astronomical Institute of the University of Bern (AIUB), the German Space Operations Center (GSOC) is building a small-aperture telescope to demonstrate the feasibility of optical surveillance of the geostationary ring. The telescope will be located in the southern hemisphere and complement an existing telescope in the northern hemisphere already operated by AIUB. Together, these two telescopes provide optimal coverage of European GEO satellites and enable continuous monitoring independent of seasonal limitations. The telescope will be operated completely automatically. The automated operations are to be demonstrated across the full range of activities, including the scheduling of observations, telescope and camera control, and data processing.
Abstract:
Space debris in geostationary orbits may be detected with optical telescopes when the objects are illuminated by the Sun. The advantage over radar lies in the illumination: radar must illuminate the objects itself, so its detection sensitivity decreases in proportion to the fourth power of the distance. The German Space Operations Center (GSOC), together with the Astronomical Institute of the University of Bern (AIUB), is setting up a telescope system called SMARTnet to demonstrate the capability of performing geostationary surveillance. The system will consist of two telescopes on one mount: a smaller telescope with an aperture of 20 cm will serve for fast surveys, while the larger one, with an aperture of 50 cm, will be used for follow-up observations. The telescopes will be operated by GSOC from Oberpfaffenhofen via the internal monitoring and control system SMARTnetMAC. The observation plan will be generated by SMARTnetPlanning seven days in advance using an optimized planning scheduler that takes into account outage time such as cloudy nights, the priority of objects, and so on. In each picture taken, stars are identified, and everything that is not a star is treated as a possible object. If the same object can be identified in multiple pictures within a short time span, the resulting trace is called a tracklet. In the next step, several tracklets are correlated to identify individual objects, and ephemeris data for these objects are generated and catalogued. This will enable services such as collision avoidance to ensure safe operations for GSOC's satellites. The complete data processing chain is handled by BACARDI, the backbone catalogue of relational debris information, and is presented as a poster.
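The linking step described above (non-star detections on multiple pictures within a short time span forming a tracklet) can be sketched as follows. The data layout, thresholds, and function names are illustrative assumptions for a minimal greedy linker, not SMARTnet's or BACARDI's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    t: float  # observation epoch in seconds
    x: float  # detector/sky coordinate (e.g. pixels)
    y: float

def build_tracklets(detections, max_gap=120.0, max_residual=2.0):
    """Greedily link non-star detections into tracklets.

    A detection joins an existing tracklet when it falls within `max_gap`
    seconds after the tracklet's last point and lies within `max_residual`
    of the position predicted by constant-rate (linear) extrapolation.
    """
    tracklets = []
    for det in sorted(detections, key=lambda d: d.t):
        for tr in tracklets:
            last = tr[-1]
            if not (0.0 < det.t - last.t <= max_gap):
                continue
            if len(tr) == 1:
                tr.append(det)  # two points define the apparent motion
                break
            prev = tr[-2]
            # Predict where the object should be at det.t.
            vx = (last.x - prev.x) / (last.t - prev.t)
            vy = (last.y - prev.y) / (last.t - prev.t)
            px = last.x + vx * (det.t - last.t)
            py = last.y + vy * (det.t - last.t)
            if ((det.x - px) ** 2 + (det.y - py) ** 2) ** 0.5 <= max_residual:
                tr.append(det)
                break
        else:
            tracklets.append([det])  # start a new candidate tracklet
    return tracklets
```

For example, three detections of one slowly drifting object yield a single three-point tracklet, while a stray detection elsewhere in the frame remains a single-point candidate; in a real system, single-point candidates would typically be discarded and multi-point tracklets passed on to the orbit-correlation stage.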
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state, and federal agencies, private firms, and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map, and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want and desire. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data and those using GIS to meet specific needs to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, which supports user-friendly metadata creation, open access licenses, archival services, and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.
Abstract:
Children and adults frequently skip breakfast, and rates of skipping are currently increasing. In addition, the food choices made for breakfast are not always healthy ones. Breakfast skipping, in conjunction with unhealthy breakfast choices, leads to impaired cognitive functioning, poor nutrient intake, and overweight. In response to these public health issues, Skip To Breakfast, a behaviorally based school and family program, was created using Intervention Mapping™ to increase consistent and healthful breakfast consumption among ethnically diverse fifth-grade students and their families. Four classroom lessons and four parent newsletters were used to deliver the intervention. For this project, a healthy "3 Star Breakfast" was promoted, which included a serving each of a dairy product, a whole grain, and a fruit, with an emphasis on each being low in fat and sugar. The goal of this project was to evaluate the feasibility and acceptability of the intervention. A pilot test of the intervention was conducted in one classroom in a school in Houston during the Fall 2007 semester. A qualitative evaluation of the intervention was conducted, which included focus groups with students, phone interviews with parents, process evaluation data from the classroom teacher, and direct observation. Sixteen students and six parents participated in the study. Data were recorded and themes were identified. Initial results showed there is a need for such programs. Based on the initial feedback, edits were made to the intervention and program. Results showed high acceptability among the teacher, students, and parents. It became apparent that students were not reliably getting the parent newsletters to their parents to read, so a change to the protocol was made, in which students will receive incentives for having parents read newsletters and return signed forms, to increase parent participation.
Other changes included small modifications to the curriculum, such as clarifying instructions, changing in-class assignments to homework assignments, and including background reading materials for the teacher. The main trial is planned for Spring 2008 in two elementary schools, using four fifth-grade classes from each, with one school acting as the control and one as the intervention school. Results from this study can be used as an adjunct to the Coordinated Approach To Child Health (CATCH) program.