923 results for acquisition of data system
Abstract:
Felsic microgranular enclaves with structures indicating that they interacted in a plastic state with their chemically similar host granite are abundant in the Maua Pluton, SE Brazil. Larger plagioclase xenocrysts are in textural disequilibrium with the enclave groundmass and show complex zoning patterns with partially resorbed An-rich cores (locally with patchy textures) surrounded by more sodic rims. In situ laser ablation-(multi-collector) inductively coupled plasma mass spectrometry trace element and Sr isotopic analyses performed on the plagioclase xenocrysts indicate open-system crystallization; however, no evidence of derivation from more primitive basic melts is observed. The An-rich cores have more radiogenic initial Sr isotopic ratios that decrease towards the outermost part of the rims, which are in isotopic equilibrium with the matrix plagioclase. These profiles may have been produced by either (1) diffusional re-equilibration after rim crystallization from the enclave-forming magma, as indicated by relatively short calculated residence times, or (2) episodic contamination with a decrease of the contaminant ratio proportional to the extent to which the country rocks were isolated by the crystallization front. Profiles of trace elements with high diffusion coefficients would require unrealistically long residence times, and can be modeled in terms of fractional crystallization. A combination of trace element and Sr isotope data suggests that the felsic microgranular enclaves from the Maua Pluton are the products of interaction between end-member magmas that had similar compositions, thus recording 'self-mixing' events.
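The residence-time argument rests on the characteristic relation between diffusion length scale and diffusivity. The block below is a minimal order-of-magnitude illustration only; the diffusivity and rim-width values are hypothetical examples, not figures taken from the study.

```latex
% Characteristic time for diffusive re-equilibration over a length scale x
% with diffusion coefficient D (hypothetical values, for illustration only):
t \approx \frac{x^{2}}{D}
  \quad\Longrightarrow\quad
t \approx \frac{(10^{-4}\,\mathrm{m})^{2}}{10^{-20}\,\mathrm{m^{2}\,s^{-1}}}
  = 10^{12}\,\mathrm{s} \approx 3\times 10^{4}\,\mathrm{yr}
```

The same relation, evaluated with each element's own diffusion coefficient, is what ties the preserved zoning profiles to upper or lower bounds on magma residence time.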
Abstract:
In this paper, we show that the steady-state free precession sequence can be used to acquire ¹³C high-resolution nuclear magnetic resonance spectra and applied to qualitative analysis. The analysis of a brucine sample using this sequence with a 60-degree flip angle and a time interval between pulses of 300 ms (acquisition time, 299.7 ms; recycle delay, 300 ms) resulted in a spectrum with a twofold enhancement in signal-to-noise ratio compared to the standard ¹³C sequence. The gain was even better when a much shorter time interval between pulses (100 ms) was applied: more than a fivefold enhancement in signal-to-noise ratio, equivalent to more than a 20-fold reduction in total data recording time. However, this short time interval between pulses produces a spectrum with severe phase and truncation anomalies. We demonstrate that these anomalies can be minimized by applying an appropriate apodization function and plotting the spectrum in the magnitude mode.
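As a rough illustration of that post-processing step (not the processing chain used in the paper), the sketch below applies an exponential apodization window to a truncated free induction decay and returns the magnitude-mode spectrum; the array name `fid`, the dwell time and the line-broadening value are assumptions.

```python
import numpy as np

def magnitude_spectrum(fid, dwell_time_s, lb_hz=5.0):
    """Exponentially apodize a truncated FID and return its magnitude-mode
    spectrum, which is insensitive to phase errors (illustrative sketch)."""
    t = np.arange(fid.size) * dwell_time_s            # time axis in seconds
    apodized = fid * np.exp(-np.pi * lb_hz * t)       # line broadening damps truncation wiggles
    spectrum = np.fft.fftshift(np.fft.fft(apodized))  # FFT, zero frequency centred
    return np.abs(spectrum)                           # magnitude mode discards phase anomalies
```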
Abstract:
The main objective of this degree project is to implement an Application Availability Monitoring (AAM) system named Softek EnView for Fujitsu Services. The aim of implementing the AAM system is to proactively identify end-user performance problems, such as application and site performance, before the actual end users experience them. No matter how well applications and sites are designed and no matter how well they meet business requirements, they are useless to the end users if the performance is slow and/or unreliable. It is important for the customers to find out whether the end-user problems are caused by the network or by application malfunction. Softek EnView is comprised of the following components: Robot, Monitor, Reporter, Collector and Repository. The implemented system, however, is designed to use only some of these EnView elements: Robot, Reporter and Repository. Robots can be placed at any key user location and are dedicated to customers, which means that when the number of customers increases, the number of Robots increases at the same time. To make the AAM system ideal for the company to use, it was integrated with Fujitsu Services’ centralised monitoring system, BMC PATROL Enterprise Manager (PEM); that was the reason for deciding to drop the EnView Monitor element. After the system was fully implemented, the AAM system was ready for production. Transactions were (and are) written and deployed on Robots to simulate typical end-user actions. These transactions are configured to run at certain intervals, which are defined together with the customers. While they are run against customers’ applications automatically, the transactions collect availability and response-time data all the time. In case of a failure in a transaction, the Robot immediately quits the transaction and writes detailed information to a log file about what went wrong and which element failed while going through the application. An alert is then generated by a BMC PATROL Agent based on this data and sent to the BMC PEM. Fujitsu Services’ monitoring room receives the alert and reacts to it according to the ITIL incident management process, alerting system specialists on critical incidents to resolve problems. As a result of the data gathered by the Robots, weekly reports, which contain detailed statistics and trend analyses of the ongoing quality of IT services, are provided for the customers.
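For illustration only, the following sketch shows the kind of synthetic transaction a Robot might execute: it measures availability and response time, logs details on failure (the record from which an alert could be raised), and repeats at a customer-defined interval. The URL, timeout, interval and log format are hypothetical and are not part of the Softek EnView or BMC PATROL products.

```python
import logging
import time
import urllib.request

logging.basicConfig(filename="robot.log", level=logging.INFO)

def run_transaction(url, timeout_s=30):
    """One simulated end-user step: fetch a page, record availability and
    response time, and log what failed if the step does not complete."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            elapsed = time.monotonic() - start
            logging.info("OK %s status=%s response_time=%.2fs", url, resp.status, elapsed)
            return True, elapsed
    except Exception as exc:
        elapsed = time.monotonic() - start
        logging.error("FAIL %s after %.2fs: %s", url, elapsed, exc)  # alert raised from this record
        return False, elapsed

if __name__ == "__main__":
    while True:                                        # interval agreed with the customer
        run_transaction("https://customer-app.example/login")
        time.sleep(300)                                # e.g. every 5 minutes
```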
Abstract:
The main idea of this research is to solve the problem of inventory management for the paper industry SPM PVT Limited. The aim of this research was to find a methodology by which the inventory of raw material could be kept at a minimum level by means of a buffer stock level. The main objective then lies in finding the minimum buffer stock level according to the daily consumption of raw material, finding the Economic Order Quantity (EOQ) and the reorder point, and determining how many orders will be placed in a year to control shortages of raw material. In this project, we discuss the continuous review model (deterministic EOQ model), which includes the probabilistic demand directly in the formulation. According to the formula, we determine the reorder point and the order-up-to model. The problem was tackled mathematically, and simulation modeling was used where a mathematically tractable solution was not possible. The simulation modeling was done with the Awesim software for developing the simulation network. This simulation network has the ability to predict the buffer stock level based on the variable consumption of raw material and the lead time. The data for this simulation network were collected from the industrial engineering personnel and the departmental studies of the concerned factory. At the end, we find the optimum order quantity, reorder point and order days.
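A minimal sketch of the quantities discussed above (EOQ, reorder point and orders per year), assuming the classic deterministic EOQ formula; the demand, cost and lead-time figures are made up for illustration and are not the factory's data.

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classic economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def reorder_point(daily_demand, lead_time_days, buffer_stock):
    """Reorder when stock falls to expected lead-time demand plus the buffer."""
    return daily_demand * lead_time_days + buffer_stock

# Hypothetical figures, for illustration only.
annual_demand = 12_000
q = eoq(annual_demand, order_cost=50.0, holding_cost_per_unit=2.0)
r = reorder_point(daily_demand=33.0, lead_time_days=7, buffer_stock=60.0)
print(f"EOQ ≈ {q:.0f} units, reorder point ≈ {r:.0f} units, "
      f"orders per year ≈ {annual_demand / q:.1f}")
```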
Abstract:
Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides functionality to perform those queries. Several provenance systems are being developed, but all focus on systems in which the components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process, and does the effect of a process match what was intended by those initiating it? In this paper, we address this limitation by integrating two solutions: a generic, re-usable framework for representing the provenance of data in service-oriented architectures and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding the responsibility for and success of a process, and evaluate the approach with a simulated healthcare example.
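A toy sketch, not the paper's actual algorithm, of how a responsibility query might walk provenance and delegation records to find who was ultimately responsible for an effect; the record structure and names are hypothetical.

```python
# Hypothetical provenance records: each effect points to the actor that caused
# it, and each actor may point to the agent that delegated the work to it.
caused_by = {"result.csv": "analysis_service"}
delegated_by = {"analysis_service": "lab_assistant_agent",
                "lab_assistant_agent": "principal_investigator"}

def ultimately_responsible(effect):
    """Follow the delegation chain upward from the actor that produced the
    effect until an agent with no delegator is reached."""
    actor = caused_by[effect]
    while actor in delegated_by:
        actor = delegated_by[actor]
    return actor

print(ultimately_responsible("result.csv"))  # -> principal_investigator
```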
Abstract:
Purpose: This study evaluated the bond strength of two etch-and-rinse adhesive systems (two- and three-step) and a self-etching system to coronal and root canal dentin. Materials and Methods: The root canals of 30 human incisors and canines were instrumented and prepared with burs. The posts used for luting were duplicated with dual resin cement (Duo-link) inside Aestheti Plus #2 molds. Thus, three groups were formed (n = 10) according to the adhesive system employed: All-Bond 2 (TE3) + resin cement post (rcp) + Duo-link (DI); One-Step Plus (TE2) + rcp + DI; Tyrian/One-Step Plus (SE) + rcp + DI. Afterwards, 8 transverse sections (1.5 mm) were cut from 4 mm above the CEJ up to 4 mm short of the root canal apex, comprising coronal and root canal dentin. The sections were submitted to push-out testing in a universal testing machine (EMIC, 1 mm/min). Bond strength data were analyzed with two-way repeated measures ANOVA and Tukey's test (p < 0.05). Results: The relationship between the adhesives was not the same in the different regions (p < 0.05). Comparison of the means achieved with the adhesives in each region (Tukey; p < 0.05) revealed that TE3 (mean ± standard deviation: 5.22 ± 1.70) was higher than TE2 (2.60 ± 1.74) and SE (1.68 ± 1.85). Conclusion: Under the experimental conditions, better bonding to dentin was achieved using the three-step etch-and-rinse system, especially in the coronal region. Therefore, the traditional three-step etch-and-rinse adhesive system seems to be the best choice for teeth needing adhesive endodontic restorations.
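A simplified sketch of the post-hoc comparison step, assuming a one-way layout rather than the full two-way repeated-measures design used in the study; the bond-strength values and group sizes below are made up for illustration.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up push-out bond strengths; the study's per-slice data and the
# repeated-measures structure are not reproduced here.
strength = np.array([5.1, 5.4, 4.9, 2.5, 2.8, 2.4, 1.6, 1.9, 1.5])
adhesive = np.array(["TE3"] * 3 + ["TE2"] * 3 + ["SE"] * 3)

# Pairwise Tukey HSD comparisons at alpha = 0.05.
print(pairwise_tukeyhsd(endog=strength, groups=adhesive, alpha=0.05))
```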
Abstract:
We search for decays of Kaluza-Klein excitations of the graviton in the Randall-Sundrum model of extra dimensions to e+e- and γγ in 1 fb⁻¹ of pp̄ collisions at √s = 1.96 TeV collected by the D0 detector at the Fermilab Tevatron. We set 95% confidence level upper limits on the production cross section times branching fraction, which translate into lower limits on the mass of the lightest excitation between 300 and 900 GeV for values of the coupling k/M̄_Pl between 0.01 and 0.1. © 2008 The American Physical Society.
Abstract:
Background: Neuropsychiatric symptoms (NPS) affect almost all patients with dementia and are a major focus of study and treatment. Accurate assessment of NPS through valid, sensitive and reliable measures is crucial. Although current NPS measures have many strengths, they also have some limitations (e.g. acquisition of data is limited to informants or caregivers as respondents, limited depth of items specific to moderate dementia). Therefore, we developed a revised version of the NPI, known as the NPI-C. The NPI-C includes expanded domains and items, and a clinician-rating methodology. This study evaluated the reliability and convergent validity of the NPI-C at ten international sites (seven languages). Methods: Face validity for 78 new items was obtained through a Delphi panel. A total of 128 dyads (caregivers/patients) from three severity categories of dementia (mild = 58, moderate = 49, severe = 21) were interviewed separately by two trained raters using two rating methods: the original NPI interview and a clinician-rated method. Rater 1 also administered four additional, established measures: the Apathy Evaluation Scale, the Brief Psychiatric Rating Scale, the Cohen-Mansfield Agitation Index, and the Cornell Scale for Depression in Dementia. Intraclass correlations were used to determine inter-rater reliability. Pearson correlations between the four relevant NPI-C domains and their corresponding outside measures were used for convergent validity. Results: Inter-rater reliability was strong for most items. Convergent validity was moderate (apathy and agitation) to strong (hallucinations and delusions; agitation and aberrant vocalization; and depression) for clinician ratings in NPI-C domains. Conclusion: Overall, the NPI-C shows promise as a versatile tool which can accurately measure NPS and which uses a uniform scale system to facilitate data comparisons across studies. Copyright © 2010 International Psychogeriatric Association.
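As an illustration of the convergent-validity step, a minimal sketch correlating one hypothetical NPI-C domain score with its corresponding established measure; the numbers are invented and do not reproduce the study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical clinician ratings for one NPI-C domain and the corresponding
# established scale, one pair per dyad (values are made up).
npic_apathy = np.array([4, 7, 2, 9, 5, 3, 8, 6])
apathy_evaluation_scale = np.array([30, 46, 22, 55, 38, 27, 50, 41])

r, p = pearsonr(npic_apathy, apathy_evaluation_scale)
print(f"convergent validity: r = {r:.2f}, p = {p:.3f}")
```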
Abstract:
Spinal cord injury (SCI) results not only in paralysis, but is also associated with a range of autonomic dysregulation that can interfere with cardiovascular, bladder, bowel, temperature, and sexual function. The extent of the autonomic dysfunction is related to the level and severity of injury to descending autonomic (sympathetic) pathways. For many years there was limited awareness of these issues, and the attention given to them by the scientific and medical community was scarce. Although a new system to document the impact of SCI on autonomic function has recently been proposed, the current standard of assessment of SCI (the American Spinal Injury Association (ASIA) examination) evaluates motor and sensory pathways, but not the severity of injury to autonomic pathways. Besides the severe impact on quality of life, autonomic dysfunction in persons with SCI is associated with an increased risk of cardiovascular disease and mortality. Therefore, obtaining information regarding autonomic function in persons with SCI is pivotal, and clinical examinations and laboratory evaluations to detect the presence of autonomic dysfunction and quantify its severity are mandatory. Furthermore, previous studies demonstrated that there is an intimate relationship between the autonomic nervous system and sleep from anatomical, physiological, and neurochemical points of view. Although previous epidemiological studies demonstrated that sleep problems are common in SCI, so far only limited polysomnographic (PSG) data are available. Finally, until now, circadian and state-dependent autonomic regulation of blood pressure (BP), heart rate (HR) and body core temperature (BcT) had never been assessed in SCI patients. The aim of the current study was to establish the association between the autonomic control of cardiovascular function and thermoregulation, sleep parameters, and increased cardiovascular risk in SCI patients.
Abstract:
The rapid development in the field of lighting and illumination allows low energy consumption and a rapid growth in the use and development of solid-state sources. As the efficiency of these devices increases and their cost decreases, there are predictions that they will become the dominant source for general illumination in the short term. The objective of this thesis is to study, through extensive simulations in realistic scenarios, the feasibility and exploitation of visible light communication (VLC) for vehicular ad hoc network (VANET) applications. A brief introduction presents the new scenario of smart cities, in which visible light communication will become a fundamental enabling technology for future communication systems. Specifically, this thesis focuses on the acquisition of several, frequent, and small data packets from vehicles, exploited as sensors of the environment. The use of vehicles as sensors is a new paradigm to enable efficient environment monitoring and improved traffic management. In most cases, the sensed information must be collected at a remote control centre, and one of the most challenging aspects is the uplink acquisition of data from vehicles. This thesis discusses the opportunity to take advantage of short-range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications to offload the cellular networks. More specifically, it discusses the system design and assesses the obtainable cellular resource saving by considering the impact of the percentage of vehicles equipped with short-range communication devices, of the number of deployed road side units, and of the adopted routing protocol. Where short-range communications are concerned, WAVE/IEEE 802.11p is considered as the standard for VANETs. Its use together with VLC is considered in urban vehicular scenarios to let vehicles communicate without involving the cellular network. The study is conducted by simulation, considering both a simulation platform (SHINE, simulation platform for heterogeneous interworking networks) developed within the Wireless communication Laboratory (Wilab) of the University of Bologna and CNR, and the network simulator ns-3, trying to realistically represent all the wireless network communication aspects. Specifically, a simulation of the vehicular system was implemented in ns-3 by creating a new module for the simulator. This module will help to study VLC applications in VANETs. The final observations should encourage further research in the area and help optimize the performance of VLC system applications in the future.
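A toy Monte Carlo sketch (not the SHINE or ns-3 models used in the thesis) of how the cellular resource saving could be estimated as a function of the share of equipped vehicles and the number of road side units; all parameters and the reachability model are hypothetical.

```python
import random

def offloaded_fraction(equipped_ratio, n_rsu, reach_prob_per_rsu,
                       n_trials=10_000, seed=1):
    """Monte Carlo estimate of the share of sensed-data packets that avoid the
    cellular uplink: a packet is offloaded when its source vehicle carries a
    short-range radio AND at least one road side unit is reachable (direct V2R
    or V2V relaying, folded into reach_prob_per_rsu)."""
    rng = random.Random(seed)
    offloaded = 0
    for _ in range(n_trials):
        if rng.random() < equipped_ratio:
            if any(rng.random() < reach_prob_per_rsu for _ in range(n_rsu)):
                offloaded += 1
    return offloaded / n_trials

# Hypothetical parameters: 40% equipped vehicles, 6 RSUs, 20% reach probability each.
print(f"cellular resource saving ≈ {offloaded_fraction(0.4, 6, 0.2):.1%}")
```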
Abstract:
Fine powders commonly have poor flowability and dispersibility due to interparticle adhesion that leads to the formation of agglomerates. Knowledge of adhesion in particle collectives is indispensable for gaining a deeper fundamental understanding of particle behavior in powders. Especially in the pharmaceutical industry, control of adhesion forces in powders is mandatory to improve the performance of inhalation products. Typically, the size of inhalable particles is in the range of 1 - 5 µm. In this thesis, a new method was developed to measure adhesion forces of particles as an alternative to the established colloidal probe and centrifuge techniques, which are both experimentally demanding, time consuming and of limited practical applicability. The new method is based on the detachment of individual particles from a surface due to their inertia. The required acceleration, on the order of 500,000 g, is provided by a Hopkinson bar shock excitation system and measured via laser vibrometry. Particle detachment events are detected on-line by optical video microscopy. Subsequent automated data evaluation allows obtaining a statistical distribution of particle adhesion forces. To validate the new method, adhesion forces for ensembles of single polystyrene and silica microspheres on a polystyrene-coated steel surface were measured under ambient conditions. It was possible to investigate more than 150 individual particles in one experiment and obtain adhesion values of particles in a diameter range of 3 - 13 µm. This enables a statistical evaluation, while the measuring effort and time are considerably lower compared to the established techniques. Measured adhesion forces of smaller particles agreed well with values from colloidal probe measurements and theoretical predictions. However, for the larger particles a stronger increase of adhesion with diameter was observed. This discrepancy might be induced by surface roughness and heterogeneity that influence small and large particles differently. By measuring adhesion forces of corrugated dextran particles with sizes down to 2 µm, it was demonstrated that the Hopkinson bar method can be used to characterize more complex sample systems as well. Thus, the new device will be applicable to study a broad variety of different particle-surface combinations on a routine basis, including strongly cohesive powders like pharmaceutical drugs for inhalation.
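For orientation, a minimal sketch of the inertial detachment principle: the force acting on a particle is its mass times the applied acceleration, so the accessible adhesion force follows from the particle size, its density and the shock acceleration. The diameter, density and acceleration below are hypothetical example values, not measurements from the thesis.

```python
import math

G = 9.81  # standard gravity, m/s^2

def inertial_detachment_force(diameter_m, density_kg_m3, acceleration_in_g):
    """Inertial force on a spherical particle, F = m * a, with the mass taken
    from the sphere volume and bulk density (illustrative sketch)."""
    volume = math.pi / 6.0 * diameter_m ** 3
    mass = density_kg_m3 * volume
    return mass * acceleration_in_g * G

# Hypothetical example: 5 µm silica-like sphere (~2000 kg/m^3) at 500,000 g.
force_n = inertial_detachment_force(5e-6, 2000.0, 5e5)
print(f"inertial force ≈ {force_n * 1e9:.0f} nN")  # detachment occurs once this exceeds adhesion
```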
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect the system files and other sensitive user files from unauthorized accesses, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. The security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanism and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly. The integrity of the data is protected indirectly, by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It is an adaptation of the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. The access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, the access decisions are made upon situations involving multiple entities. A situation is programmable, with predicates on the attributes of the subject, the object and the system. The file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall system is restricted to a specified part of the system, all the other resources are not affected. This enables a relatively smooth adoption. This fact, and the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed that: beginner users found it easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
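A minimal sketch, assuming a rule list evaluated in order, of how situation-based decisions over subject, object and system attributes (such as time of day) could look; the attribute names, the rules and the `defer` fall-through are hypothetical and are not the prototype's implementation.

```python
from datetime import datetime, time

# A "situation" is a predicate over the attributes of the request (subject,
# object, operation, system state); a rule pairs a situation with an action.
RULES = [
    # Deny writes under /etc outside business hours, whoever the subject is.
    (lambda req: req["path"].startswith("/etc")
         and req["op"] == "write"
         and not time(8, 0) <= req["now"].time() <= time(18, 0),
     "deny"),
    # Everything else falls through to the ordinary permission-bit check.
    (lambda req: True, "defer"),
]

def decide(request):
    """Return the action of the first rule whose situation matches."""
    for situation, action in RULES:
        if situation(request):
            return action

req = {"uid": 1000, "op": "write", "path": "/etc/passwd",
       "now": datetime(2024, 1, 1, 22, 30)}
print(decide(req))  # -> deny
```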