812 results for 350202 Business Information Systems (incl. Data Processing)


Relevance:

100.00%

Publisher:

Abstract:

The purpose of this work was to study fragmentation of forest formations (mesophytic forest, riparian woodland and savannah vegetation (cerrado)) in a 15,774-ha study area located in the Municipal District of Botucatu in Southeastern Brazil (São Paulo State). A land use and land cover map was made from a color composition of a Landsat-5 Thematic Mapper (TM) image. The edge effect caused by habitat fragmentation was assessed by overlaying, on a geographic information system (GIS), the land use and land cover data with the spectral ratio. The degree of habitat fragmentation was analyzed by deriving: 1. mean patch area and perimeter; 2. patch number and density; 3. perimeter-area ratio, fractal dimension (D), and shape diversity index (SI); and 4. distance between patches and dispersion index (R). In addition, the following relationships were modeled: 1. distribution of natural vegetation patch sizes; 2. perimeter-area relationship and the number and area of natural vegetation patches; and 3. edge effect caused by habitat fragmentation. The values of R indicated that savannah patches (R = 0.86) were aggregated while patches of natural vegetation as a whole (R = 1.02) were randomly dispersed in the landscape. There was a high frequency of small patches in the landscape whereas large patches were rare. In the perimeter-area relationship, there was no sign of scale distinction in the patch shapes. In the patch number-landscape area relationship, D, though apparently scale-dependent, tends to be constant as area increases. This phenomenon was correlated with the tendency to reach a constant density as the working scale was increased. In the edge effect analysis, the edge-center distance was properly estimated by a model in which the edge-center distance was considered a function of the total patch area and the SI. (C) 1997 Elsevier B.V.
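The shape and dispersion indices named above can be sketched in a few lines. The patch values below are hypothetical, and the formulas follow standard landscape-ecology definitions (a circle-normalized shape index, and a Clark–Evans-style nearest-neighbour dispersion ratio), which the abstract itself does not spell out:

```python
import math

# Hypothetical patches: (area in ha, perimeter in m) - illustrative only
patches = [(12.0, 1800.0), (3.5, 900.0), (40.0, 3200.0)]

def shape_index(area_ha, perim_m):
    """SI = P / (2*sqrt(pi*A)); SI = 1 for a perfect circle,
    larger values indicate more complex (edgier) shapes."""
    area_m2 = area_ha * 10_000          # convert ha -> m^2
    return perim_m / (2 * math.sqrt(math.pi * area_m2))

def dispersion_index(mean_nn_dist_m, density_per_m2):
    """R = observed / expected mean nearest-neighbour distance.
    Expected distance under complete spatial randomness: 1 / (2*sqrt(density)).
    R < 1 -> aggregated, R ~ 1 -> random, R -> 2.15 -> uniform."""
    expected = 1 / (2 * math.sqrt(density_per_m2))
    return mean_nn_dist_m / expected

for a, p in patches:
    print(f"area={a} ha, SI={shape_index(a, p):.2f}")
```

Under these definitions, the reported R = 0.86 for savannah patches falls on the aggregated side of randomness, while R = 1.02 is indistinguishable from random dispersion.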

Relevance:

100.00%

Publisher:

Abstract:

The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data. © 2010 IOP Publishing Ltd and SISSA.

Relevance:

100.00%

Publisher:

Abstract:

The ability to utilize information systems (IS) effectively is becoming a necessity for business professionals. However, individuals differ in their abilities to use IS effectively, with some achieving exceptional performance in IS use and others being unable to do so. Therefore, developing a set of skills and attributes to achieve IS user competency, or the ability to realize the fullest potential and the greatest performance from IS use, is important. Various constructs have been identified in the literature to describe IS users with regard to their intentions to use IS and their frequency of IS usage, but studies to describe the relevant characteristics associated with highly competent IS users, or those who have achieved IS user competency, are lacking. This research develops a model of IS user competency by using the Repertory Grid Technique to identify a broad set of characteristics of highly competent IS users. A qualitative analysis was carried out to identify categories and sub-categories of these characteristics. Then, based on the findings, a subset of the model of IS user competency focusing on the IS-specific factors – domain knowledge of and skills in IS, willingness to try and to explore IS, and perception of IS value – was developed and validated using the survey approach. The survey findings suggest that all three factors are relevant and important to IS user competency, with willingness to try and to explore IS being the most significant factor. This research generates a rich set of factors explaining IS user competency, such as perception of IS value. The results not only highlight characteristics that can be fostered in IS users to improve their performance with IS use, but also present research opportunities for IS training and potential hiring criteria for IS users in organizations.

Relevance:

100.00%

Publisher:

Abstract:

The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the '90s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software applications used by companies for their process automation. Editor programs allow information related to the production chain to be stored in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Different software tools, which capture and store documents and information automatically in the PLM system, have been developed in recent years. One of them is the "DirectPLM" application, developed by the Italian company Focus PLM and designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present "DirectPLM2", a new version of the previous DirectPLM application, designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the concrete implementation of commands, which was previously strongly dependent on Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each one devised for a specific PLM system: the company can focus its development effort on the specific set of software components that provides the specialized functions interacting with that particular PLM system. This allows a shorter time-to-market and gives the company a significant competitive advantage.

Relevance:

100.00%

Publisher:

Abstract:

A post-classification change detection technique based on a hybrid classification approach (unsupervised and supervised) was applied to Landsat Thematic Mapper (TM), Landsat Enhanced Thematic Mapper Plus (ETM+), and ASTER images acquired in 1987, 2000 and 2004, respectively, to map land use/cover changes in the Pic Macaya National Park in the southern region of Haiti. Each image was classified individually into six land use/cover classes: built-up, agriculture, herbaceous, open pine forest, mixed forest, and barren land, using unsupervised ISODATA and maximum likelihood supervised classifiers with the aid of ground truth data collected in the field. Ground truth information, collected in the field in December 2007 and including equalized stratified random points that were visually interpreted, was used to assess the accuracy of the classification results. The overall accuracy of the land classification for each image was, respectively: 1987 (82%), 2000 (82%), 2004 (87%). A post-classification change detection technique was used to produce change images for 1987 to 2000, 1987 to 2004, and 2000 to 2004. It was found that significant changes in land use/cover occurred over the 17-year period. The results showed increases in built-up (from 10% to 17%) and herbaceous (from 5% to 14%) areas between 1987 and 2004. The increase in herbaceous cover was mostly caused by the abandonment of exhausted agricultural lands. At the same time, open pine forest and mixed forest areas lost 75% and 83% of their area to other land use/cover types: open pine forest (from 20% to 14%) and mixed forest (from 18% to 12%) were transformed into agricultural area or barren land. This study illustrated the continuing deforestation, land degradation and soil erosion in the region, which in turn are leading to a decrease in vegetative cover. 
The study also showed the importance of Remote Sensing (RS) and Geographic Information System (GIS) technologies for timely estimation of changes in land use/cover, and for evaluating their causes in order to design an ecologically based management plan for the park.
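Post-classification change detection of the kind described compares class labels pixel by pixel between two dates and tabulates the transitions. A minimal sketch, with hypothetical class codes standing in for the six classes (not the study's data):

```python
from collections import Counter

# Hypothetical per-pixel class labels for two dates; integer codes stand in
# for the six classes (0=built-up, 1=agriculture, 2=herbaceous, ...)
map_1987 = [0, 1, 1, 3, 3, 4, 5, 1, 0]
map_2004 = [0, 0, 2, 1, 3, 4, 5, 2, 0]

# Change matrix as a Counter keyed by (class at date 1, class at date 2);
# off-diagonal entries are the class-to-class transitions
change = Counter(zip(map_1987, map_2004))

changed = sum(n for (a, b), n in change.items() if a != b)
print(f"{changed} of {len(map_1987)} pixels changed class")
print(f"agriculture -> herbaceous: {change[(1, 2)]} pixels")
```

Percent-area figures like those in the abstract come from the row and column totals of such a matrix, scaled by pixel area.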

Relevance:

100.00%

Publisher:

Abstract:

The notion of outsourcing – making arrangements with an external entity for the provision of goods or services to supplement or replace internal efforts – has been around for centuries. The outsourcing of information systems (IS) is however a much newer concept, but one which has been growing dramatically. This book attempts to synthesize what is known about IS outsourcing by dividing the subject into three interrelated parts: (1) Traditional Information Technology Outsourcing, (2) Information Technology Offshoring, and (3) Business Process Outsourcing. The book should be of interest to all academics and students in the field of Information Systems, as well as corporate executives and professionals who seek a more profound analysis and understanding of the underlying factors and mechanisms of outsourcing.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems. PURPOSE: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. METHODS: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features. RESULTS: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert. DISCUSSION: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (of 42 total) while the worst lacked eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern. CONCLUSIONS: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems, and certification bodies.

Relevance:

100.00%

Publisher:

Abstract:

Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict requirements in terms of performance. It is often the case that such applications have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies using the CloudSim simulator, using data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities. We then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
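The scaling policies being compared are, at bottom, rules that map observed load to a VM count. The sketch below shows a generic threshold-based horizontal scaling policy; it is an illustration only, since the paper's experiments extend the Java-based CloudSim simulator, and the thresholds and utilization trace here are invented:

```python
def scale_decision(cpu_util, n_vms, upper=0.80, lower=0.30,
                   min_vms=1, max_vms=20):
    """Simple threshold-based horizontal scaling rule:
    add a VM when average utilization exceeds `upper`,
    remove one when it drops below `lower`."""
    if cpu_util > upper and n_vms < max_vms:
        return n_vms + 1
    if cpu_util < lower and n_vms > min_vms:
        return n_vms - 1
    return n_vms

# A simulated utilization trace drives the scaling decisions step by step
trace = [0.95, 0.90, 0.85, 0.50, 0.20, 0.10]
vms = 2
for u in trace:
    vms = scale_decision(u, vms)
print(vms)
```

An SLA-aware variant, as evaluated in the paper, would additionally condition the decision on response-time or throughput targets rather than on utilization alone.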

Relevance:

100.00%

Publisher:

Abstract:

Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas. Design. This is a secondary data analysis utilizing the publicly available TRI, maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and the childhood cancer rate (ages 0-14 years) by county, for the years 1995-2003, were taken from the Texas Cancer Registry, available at the Texas Department of State Health Services website. Setting. This study was limited to the child population of the State of Texas. Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Maps were created using MapInfo Professional version 8.0. Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in number of facilities and total disposal was observed except for the highest category based on cancer rate quartiles. Linear regression analysis using log transformation for number of facilities and total disposal in predicting cancer rates was computed; however, neither variable was found to be a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were indicated. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities. Conclusion. We have used a unique methodology, combining GIS and spatial clustering techniques with existing statistical approaches, in examining the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a concrete association was not indicated, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal or other releases and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
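The odds ratio reported above can be read as the ratio of outcome odds between exposed and unexposed groups. A toy calculation with invented 2x2 counts (not the study's data), including a Woolf-type confidence interval on the log-odds scale:

```python
import math

# Hypothetical 2x2 table: counties with high cancer rate (>150) vs not,
# cross-classified by presence of TRI facilities - illustrative only
a, b = 40, 60   # facilities present: high-rate, low-rate counties
c, d = 25, 75   # no facilities:      high-rate, low-rate counties

odds_ratio = (a / b) / (c / d)

# 95% CI via Woolf's method: SE of log(OR) from the four cell counts
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the study's predictor was the natural log of the facility count rather than a binary exposure, its OR of 1.58 is interpreted per unit increase in log(facilities), not per exposed-vs-unexposed contrast as in this toy table.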

Relevance:

100.00%

Publisher:

Abstract:

The three articles that comprise this dissertation describe how small area estimation and geographic information systems (GIS) technologies can be integrated to provide useful information about the number of uninsured and where they are located. Comprehensive data about the numbers and characteristics of the uninsured are typically only available from surveys. Utilization and administrative data are poor proxies from which to develop this information. Those who cannot access services are unlikely to be fully captured, either by health care provider utilization data or by state and local administrative data. In the absence of direct measures, a well-developed estimation of the local uninsured count or rate can prove valuable when assessing the unmet health service needs of this population. However, the fact that these are “estimates” increases the chances that results will be rejected or, at best, treated with suspicion. The visual impact and spatial analysis capabilities afforded by GIS technology can strengthen the likelihood of acceptance of area estimates by those most likely to benefit from the information, including health planners and policy makers.

The first article describes how uninsured estimates are currently being performed in the Houston metropolitan region. It details the synthetic model used to calculate numbers and percentages of uninsured, and how the resulting estimates are integrated into a GIS. The second article compares the estimation method of the first article with one currently used by the Texas State Data Center to estimate numbers of uninsured for all Texas counties. Estimates are developed for census tracts in Harris County, using both models with the same data sets, and the results are statistically compared. The third article describes a new, revised synthetic method that is being tested to provide uninsured estimates at sub-county levels for eight counties in the Houston metropolitan area. It is designed to replicate the same categorical results provided by a current U.S. Census Bureau estimation method. The estimates calculated by this revised model are compared to the most recent U.S. Census Bureau estimates, using the same areas and population categories.
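A synthetic estimator of the kind described applies rates observed for demographic groups at a larger geography (e.g. statewide survey rates) to local population counts. A minimal sketch in which both the group rates and the tract populations are invented:

```python
# Hypothetical statewide uninsured rates by demographic group
group_rates = {"child_lowinc": 0.22, "adult_lowinc": 0.35, "adult_other": 0.12}

# Hypothetical census-tract population counts for the same groups
tract_pop = {"child_lowinc": 1200, "adult_lowinc": 2500, "adult_other": 4300}

# Synthetic estimate: sum over groups of (group rate x local group population)
est_uninsured = sum(group_rates[g] * n for g, n in tract_pop.items())
est_rate = est_uninsured / sum(tract_pop.values())
print(f"estimated uninsured: {est_uninsured:.0f} ({est_rate:.1%})")
```

The estimator's accuracy hinges on the assumption that group-level rates transfer from the larger geography to the tract, which is exactly what the dissertation's model comparisons test.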

Relevance:

100.00%

Publisher:

Abstract:

To reach the goals established by the Institute of Medicine (IOM) and the Centers for Disease Control's (CDC) STOP TB USA, measures must be taken to curtail a future peak in tuberculosis (TB) incidence and speed the currently stagnant rate of TB elimination. Both efforts will require, at minimum, the consideration and understanding of the third dimension of TB transmission: the location-based spread of an airborne pathogen among persons known and unknown to each other. This consideration will require an elucidation of the areas within the U.S. that have endemic TB. The Houston Tuberculosis Initiative (HTI) was a population-based active surveillance of confirmed Houston/Harris County TB cases from 1995–2004. Strengths of this dataset include the molecular characterization of laboratory-confirmed cases, the collection of geographic locations (including home addresses) frequented by cases, and the HTI time period, which parallels a decline in TB incidence in the United States (U.S.). The HTI dataset was used in this secondary data analysis to implement a GIS analysis of TB cases, the locations frequented by cases, and their association with risk factors for TB transmission.

This study reports, for the first time, the incidence of TB among the homeless in Houston, Texas. The homeless are an at-risk population for TB disease, yet they are also a population whose TB incidence has been unknown and unreported due to their non-enumeration. The first section of this dissertation identifies local areas in Houston with endemic TB disease. Many Houston TB cases who reported living in these endemic areas also share the TB risk factor of current or recent homelessness. Merging the 2004–2005 Houston enumeration of the homeless with historical HTI surveillance data of TB cases in Houston enabled this first-time report of TB risk among the homeless in Houston. The homeless were more likely to be US-born, belong to a genotypic cluster, and belong to a cluster of a larger size. The calculated average incidence among homeless persons was 411/100,000, compared to 9.5/100,000 among the housed. These alarming rates are not driven by co-infection but by social determinants. Unsheltered persons were hospitalized more days and required more follow-up time by staff than those who reported a steady housing situation. The homeless are a specific example of the increased targeting of prevention dollars that could occur if TB rates were reported for specific areas with known health disparities rather than as a generalized rate normalized over a diverse population.

It has been estimated that 27% of Houstonians use public transportation. The city layout allows bus routes to run like veins connecting even the most diverse of populations within the metropolitan area. Secondary data analysis of frequent bus use (defined as riding a route weekly) among TB cases assessed its relationship with known TB risk factors. The spatial distribution of genotypic clusters associated with bus use was assessed, along with the reported routes and epidemiologic links among cases belonging to the identified clusters.

TB cases who reported frequent bus use were more likely to have demographic and social risk factors associated with poverty, immune suppression and health disparities. An equal proportion of bus riders and non-bus riders were cultured for Mycobacterium tuberculosis, yet 75% of bus riders were genotypically clustered, indicating recent transmission, compared to 56% of non-bus riders (OR = 2.4, 95% CI (2.0, 2.8), p < 0.001). Bus riders had a mean cluster size of 50.14 vs. 28.9 (p < 0.001). Second-order spatial analysis of clustered fingerprint 2 (n = 122), a Beijing-family cluster, revealed geographic clustering among cases based on their report of bus use. Univariate and multivariate analysis of routes reported by cases belonging to these clusters found that 10 of the 14 clusters were associated with bus use. Individual Metro routes, including one route servicing the local hospitals, were found to be risk factors for belonging to a cluster shown to be endemic in Houston. The routes themselves geographically connect the census tracts previously identified as having endemic TB. 78% (15/23) of the Houston Metro routes investigated had one or more print groups reporting frequent use in every HTI study year. We present data on three specific but clonally related print groups and show that bus use is clustered in time by route and is the only known link between cases in one of the three prints: print 22. (Abstract shortened by UMI.)
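Incidence figures such as 411 per 100,000 are case counts scaled by population and observation time. A minimal sketch of the calculation (the counts below are hypothetical, not the HTI figures):

```python
def incidence_per_100k(cases, population, person_years=1.0):
    """Average annual incidence per 100,000 population.
    `person_years` is the number of years of observation."""
    return cases * 100_000 / (population * person_years)

# Hypothetical example: 41 cases over 10 years in a population of 10,000
rate = incidence_per_100k(41, 10_000, person_years=10)
print(f"{rate:.1f} per 100,000 per year")
```

The striking homeless-vs-housed gap in the text (411 vs 9.5) comes directly from the denominators: enumerating the homeless population is what makes the rate computable at all.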

Relevance:

100.00%

Publisher:

Abstract:

Geographic Information Systems are developed to handle enormous volumes of data and are equipped with numerous functionalities intended to capture, store, edit, organise, process and analyse or represent geographically referenced information. Industrial simulators for driver training, on the other hand, are real-time applications that require a virtual environment, either geospecific, geogeneric or a combination of the two, over which the simulation programs run. Ultimately, this environment constitutes a geographic location with its specific characteristics of geometry, appearance, functionality, topography, etc. The set of elements that enables the virtual simulation environment to be created, and in which the simulator user can move, is usually called the Visual Database (VDB). The work presented here addresses a topic of major interest in the field of industrial training simulators: the problem of analysing, structuring and describing the virtual environments to be used in large driving simulators. This paper sets out a methodology that uses the capabilities and benefits of Geographic Information Systems for organising, optimising and managing the Visual Database of the simulator, and for generally enhancing the quality and performance of the simulator.

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to present a simulation-based evaluation method for the comparison of different organizational forms and software support levels in the field of supply chain management (SCM). Design/methodology/approach – Apart from widely known logistic performance indicators, the discrete event simulation model explicitly considers coordination cost stemming from iterative administration procedures. Findings – The method is applied to an exemplary supply chain configuration considering various parameter settings. Curiously, additional coordination cost does not always result in improved logistic performance. Influence factor variations lead to different organizational recommendations. The results confirm the high importance of hitherto disregarded dimensions when evaluating SCM concepts and IT tools. Research limitations/implications – The model is based on simplified product and network structures. Future research shall include more complex, real-world configurations. Practical implications – The developed method is designed for the identification of improvement potential when SCM software is employed. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant IT investment can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest, and the method provides a comprehensive tool for strategic IT decision making. Originality/value – Reviewed literature is mostly focused on the benefits of SCM software implementations. However, ERP-system-based supply chain coordination is still widespread industrial practice, and its associated coordination cost has not been addressed by researchers.

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, devices that monitor the health of structures consume a great deal of power and need considerable time to acquire, process, and send information about the structure to the main processing unit. To reduce this time, fast electronic devices are beginning to be used to accelerate the processing. In this paper, some hardware algorithms implemented in a programmable logic device are described. The goal of this implementation is to accelerate the processing and reduce the amount of information that has to be sent. By reaching this goal, the time the processor needs to treat all the information is reduced, and so is the power consumption.

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to analyse Information Systems outsourcing success, measuring the latter according to the satisfaction level achieved by users and taking into account three success factors: the role played by the client firm's top management; the relationships between client and provider; and the degree of outsourcing. Design/methodology/approach – A survey was carried out by means of a questionnaire answered by 398 large Spanish firms. Its results were examined using partial least squares software and through the proposal of a structural equation model. Findings – The conclusions reveal that the perceived benefits play a mediating role in outsourcing satisfaction, and that these benefits can be grouped into three categories: strategic, economic and technological. Originality/value – The study identifies how some success factors are more influential than others depending on which type of benefits is ultimately sought with outsourcing.