947 results for lightning location system
Abstract:
The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n = 30), koilocytosis (n = 46), CIN 1 (n = 52), CIN 2 (n = 56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright (C) 2000 John Wiley and Sons, Ltd.
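A minimal sketch of the kind of triangulation profile described above, assuming nuclear centroids have already been extracted by a prior segmentation step (here using SciPy's Delaunay routine rather than the KS400 macro language of the original system); the function name, region-area parameter and example data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: Delaunay-based architectural features computed from
# nuclear centroids (mean triangle area, mean edge length, triangles per
# unit area), in the spirit of the quantitative profile described above.
import numpy as np
from scipy.spatial import Delaunay

def triangulation_profile(centroids, region_area):
    """centroids: (N, 2) array of nuclear centre coordinates (pixels).
    region_area: area of the analysed epithelial region (pixel^2)."""
    tri = Delaunay(centroids)
    pts = centroids[tri.simplices]          # (M, 3, 2) triangle vertices

    # Triangle areas via the shoelace formula.
    a, b, c = pts[:, 0], pts[:, 1], pts[:, 2]
    areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                         - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))

    # Edge lengths of every triangle (each internal edge counted twice,
    # which is acceptable for a simple mean-length descriptor).
    edges = np.concatenate([pts[:, 0] - pts[:, 1],
                            pts[:, 1] - pts[:, 2],
                            pts[:, 2] - pts[:, 0]])
    edge_lengths = np.linalg.norm(edges, axis=1)

    return {
        "mean_triangle_area": areas.mean(),
        "mean_edge_length": edge_lengths.mean(),
        "triangles_per_unit_area": len(tri.simplices) / region_area,
    }

# Example with random "nuclei" in a 512 x 512 image region:
rng = np.random.default_rng(0)
nuclei = rng.random((300, 2)) * 512
print(triangulation_profile(nuclei, region_area=512 * 512))
```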
Effects of Charge Location on the Absorptions and Lifetimes of Protonated Tyrosine Peptides in Vacuo
Abstract:
Nearby charges affect the electronic energy levels of chromophores, with the extent of the effect being determined by the magnitude of the charge and the degree of charge-chromophore separation. The molecular configuration dictates the charge-chromophore distance. Hence, in this study, we aim to assess how the location of the charge influences the absorption of a set of model protonated and diprotonated peptide ions, and whether the spectral differences are large enough to be identified. The studied ions were the dipeptide YK, the tripeptide KYK (Y = tyrosine; K = lysine), and their complexes with 18-crown-6-ether (CE). The CE targets the ammonium group by forming internal ionic hydrogen bonds and limits the folding of the peptide. In the tripeptide, the distance between the chromophore and the backbone ammonium is enlarged relative to that in the dipeptide. Experiments were performed in an electrostatic ion storage ring using a tunable laser system, and action spectra based on lifetime measurements were obtained in the range from 210 to 310 nm. The spectra are all quite similar, though there seem to be some changes in the absorption band between 210 and 250 nm, while in the lower-energy band all ions had a maximum absorption at approximately 275 nm. Lifetimes after photoexcitation were found to shorten upon protonation and lengthen upon CE complexation, in accordance with the increased number of degrees of freedom and an increase in activation energies for dissociation as the mobile proton model is no longer operative.
Abstract:
A reduction in the time required to locate and restore faults on a utility's distribution network improves the customer minutes lost (CML) measurement and hence brings direct cost savings to the operating company. The traditional approach to fault location involves fault impedance determination from high-volume waveform files dispatched across a communications channel to a central location for processing and analysis. This paper examines an alternative scheme where data processing is undertaken locally within a recording instrument, thus reducing the volume of data to be transmitted. Processed event fault reports may be emailed to relevant operational staff for the timely repair and restoration of the line.
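As an illustration of the sort of local processing such a scheme implies, the hedged sketch below computes a one-end reactance-based fault-distance estimate from voltage and current phasors and packages it as a compact text report; the phasor values, line reactance, feeder name and report format are invented for illustration and are not taken from the paper.

```python
# Minimal, hypothetical sketch of a reactance-based fault-distance estimate
# computed locally at the recording instrument, so that only a short report
# (not the full waveform file) needs to be transmitted.
import cmath

def fault_distance_km(v_phasor, i_phasor, reactance_per_km):
    """Estimate distance to fault from one-end phasors (reactance method)."""
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / reactance_per_km

def build_event_report(v_phasor, i_phasor, reactance_per_km, feeder_id):
    distance = fault_distance_km(v_phasor, i_phasor, reactance_per_km)
    return (f"Feeder {feeder_id}: estimated fault at {distance:.2f} km, "
            f"|V|={abs(v_phasor):.0f} V, |I|={abs(i_phasor):.0f} A")

# Example usage with made-up per-phase phasors (volts and amps):
print(build_event_report(6200 * cmath.exp(1j * 0.05),
                         850 * cmath.exp(-1j * 1.1),
                         reactance_per_km=0.35,
                         feeder_id="F12"))
```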
Abstract:
This paper presents a physics-based modelling procedure to predict the thermal damage of composite material when struck by lightning. The procedure uses the Finite Element Method with non-linear material models to represent the extreme thermal material behaviour of the composite material (carbon/epoxy) and an embedded copper mesh protection system. Simulation predictions are compared against published experimental data, illustrating the potential accuracy and computational cost of virtual lightning strike tests and the requirement for temperature-dependent material modelling. The modelling procedure is then used to examine and explain a number of practical solutions to minimize thermal material damage. © 2013 Elsevier Ltd.
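As a toy illustration of temperature-dependent thermal modelling (not the paper's finite-element procedure), the sketch below solves 1-D transient heat conduction through the laminate thickness with an explicit finite-difference scheme, a temperature-dependent conductivity and a prescribed surface heat flux standing in for the lightning arc; all material values and the conductivity law are placeholders.

```python
# Very simplified, hypothetical illustration of temperature-dependent
# thermal modelling: explicit 1-D finite differences through the laminate
# thickness, with a prescribed surface heat flux. This is a toy model,
# not the finite-element procedure used in the paper.
import numpy as np

def conductivity(T):
    # Placeholder temperature dependence (W/m.K).
    return 0.8 + 0.4 * np.tanh((T - 600.0) / 400.0)

def simulate(thickness=2e-3, n=41, t_end=0.05, q_surface=2e6,
             rho=1550.0, cp=1100.0, T0=300.0):
    dx = thickness / (n - 1)
    T = np.full(n, T0)
    dt = 0.2 * rho * cp * dx**2 / conductivity(T0 + 2000)  # stability margin
    for _ in range(int(t_end / dt)):
        k = conductivity(T)
        k_face = 0.5 * (k[:-1] + k[1:])
        flux = np.zeros(n + 1)                          # heat flux at cell faces
        flux[1:-1] = -k_face * (T[1:] - T[:-1]) / dx    # interior faces
        flux[0] = q_surface                             # heated front face
        flux[-1] = 0.0                                  # insulated back face
        T -= dt / (rho * cp * dx) * (flux[1:] - flux[:-1])
    return T

print(simulate().max())   # peak through-thickness temperature (K)
```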
Abstract:
This paper presents the results of an experimental investigation carried out to verify the feasibility of a ‘drive-by’ approach which uses a vehicle instrumented with accelerometers to detect and locate damage in a bridge. In theoretical simulations, a simplified vehicle-bridge interaction model is used to investigate the effectiveness of the approach in detecting damage in a bridge from vehicle accelerations. For this purpose, the accelerations are processed using a continuous wavelet transform, and damage indicators are evaluated and compared. Alternative statistical pattern recognition techniques are incorporated to allow for repeated vehicle passes. Parameters such as vehicle speed, damage level, damage location and road roughness are varied in the simulations to investigate their effects. A scaled laboratory experiment is carried out to assess the effectiveness of the approach in a more realistic environment, considering a number of bridge damage scenarios.
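A minimal sketch of the wavelet step under stated assumptions: a continuous Morlet wavelet transform of an axle acceleration record (via PyWavelets) and a crude energy-based damage indicator comparing a reference pass with a later pass; the indicator, scales and synthetic signals are illustrative, not the exact formulation used in these studies.

```python
# Hypothetical sketch: continuous Morlet wavelet transform of vehicle
# accelerations and a simple energy-change damage indicator.
import numpy as np
import pywt

def wavelet_energy(acc, fs, scales=np.arange(1, 64)):
    """Continuous Morlet wavelet transform -> energy per time sample."""
    coeffs, _ = pywt.cwt(acc, scales, "morl", sampling_period=1.0 / fs)
    return np.sum(np.abs(coeffs) ** 2, axis=0)     # sum over scales

def damage_indicator(acc_reference, acc_current, fs):
    """Relative change in total wavelet energy between two passes."""
    e_ref = wavelet_energy(acc_reference, fs).sum()
    e_cur = wavelet_energy(acc_current, fs).sum()
    return (e_cur - e_ref) / e_ref

# Example with synthetic accelerations (the later pass carries a small
# localized oscillation standing in for a damage-induced response):
fs = 200.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
healthy = 0.05 * np.sin(2 * np.pi * 3.2 * t) + 0.01 * rng.standard_normal(t.size)
damaged = healthy + 0.03 * np.exp(-((t - 2.5) / 0.1) ** 2) * np.sin(2 * np.pi * 12 * t)
print(f"damage indicator: {damage_indicator(healthy, damaged, fs):.3f}")
```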
Abstract:
Many of the bridges currently in use worldwide are approaching the end of their design lives. However, rehabilitating and extending the lives of these structures raises important safety issues. There is also a need for increased monitoring, which has considerable cost implications for bridge management systems. Existing structural health monitoring (SHM) techniques include vibration-based approaches which typically involve direct instrumentation of the bridge and are important as they can indicate deterioration of the bridge condition. However, they can be labour-intensive and expensive. In the past decade, alternative indirect vibration-based approaches which utilise the response of a vehicle passing over a bridge have been developed. This paper investigates such an approach: a low-cost approach for the monitoring of bridge structures which consists of the use of a vehicle fitted with accelerometers on its axles. The approach aims to detect damage in the bridge while obviating the need for direct instrumentation of the bridge. Here, the effectiveness of the approach in detecting damage in a bridge is investigated using a simplified vehicle-bridge interaction (VBI) model in theoretical simulations and a scaled VBI model in a laboratory experiment. In order to identify the existence and location of damage, the vehicle accelerations are recorded and processed using a continuous Morlet wavelet transform, and a damage index is established. A parametric study is carried out to investigate the effect of parameters such as the bridge span length, vehicle speed, vehicle mass, damage level and road surface roughness on the accuracy of the results.
Abstract:
This paper investigates a low-cost wavelet-based approach for the preliminary monitoring of bridge structures, consisting of the use of a vehicle fitted with accelerometers on its axles. The approach aims to reduce the need for direct instrumentation of the bridge. A time-frequency analysis is carried out in order to identify the existence and location of damage from vehicle accelerations. Firstly, in theoretical simulations, a simplified vehicle-bridge interaction model is used to investigate the effectiveness of the approach. A number of damage indicators are evaluated and compared. A range of parameters such as the bridge span, vehicle speed, damage level and location, signal noise and road roughness are varied in simulations. Secondly, a scaled laboratory experiment is carried out to validate the results of the theoretical analysis and assess the ability of the selected damage indicators to detect changes in the bridge response from vehicle accelerations.
Abstract:
Run Off Road (ROR) crashes are road accidents that often result in severe injuries or fatalities. To reduce the severity of ROR crashes, “forgiving roadsides” need to be designed; this includes identifying situations where a Vehicle Restraint System (VRS) is needed and which VRS should be selected for a specific location and traffic condition. Whilst there are standards covering the testing, evaluation and classification of VRS within Europe (EN1317 parts 1 to 8), their selection, location and installation requirements are typically based upon national guidelines and standards, often produced by National Road Authorities (NRA) and/or overseeing organisations. Due to local conditions, these national guidelines vary across Europe.
The European SAVeRS project funded by CEDR has developed a practical and readily understandable VRS guidance document and a user-friendly software tool which allow designers and road administrations to select the most appropriate solution in different road and traffic conditions.
This paper describes the main outcomes of the project, the process to select the most appropriate roadside barrier, and the user-friendly SAVeRS tool.
Abstract:
The use of smartphones and tablets has become almost commonplace these days. Smartphones, besides serving their main purpose of making and receiving calls, have become one of the main devices for obtaining information from the Internet, using the commonly installed browsers or dedicated applications. Furthermore, several other components, such as GPS (Global Positioning System) receivers, are present in the majority of modern smartphones and tablets on the market. These components give current systems a very high potential for use. One example of applicability comes from the wish to find and navigate to events or activities which are, or will soon be, occurring near the user. The LifeSpeeder platform is one of the first applications on the mobile market to take exactly this into consideration, i.e., a mobile and desktop application which allows users to locate events according to their preferences and to get help navigating to them. In this paper we briefly describe LifeSpeeder's front- and back-end. © 2014 Springer International Publishing.
Abstract:
Allergies to grass pollen are the number one cause of outdoor hay fever. The human immune system reacts with symptoms to allergens from pollen. Objective: We investigated the natural variability in the release of the major group 5 allergen from grass pollen across Europe. Methods: Airborne pollen and allergens were simultaneously collected daily with a volumetric spore trap and a high-volume cascade impactor at 10 sites across Europe for 3 consecutive years. Group 5 allergen was determined with a Phl p 5-specific ELISA in two fractions of ambient air: Particulate Matter (PM) >10 μm and 10 μm > PM > 2.5 μm. Mediator release by ambient air was determined in FcεR1-humanized basophils. The origin of pollen was modeled and condensed to pollen potency maps. Results: On average, grass pollen released 2.3 pg Phl p 5/pollen. Allergen release per pollen (potency) varied substantially, ranging from 0 to 9 pg Phl p 5/pollen (5th to 95th percentile). The main variation was local and day-to-day. Average potency maps across Europe varied between years. Mediator release from basophilic granulocytes correlated better with allergen/m³ (r² = 0.80, p < 0.001) than with pollen/m³ (r² = 0.61, p < 0.001). In addition, pollen released different amounts of allergen in the non-pollen-bearing fraction of ambient air depending on humidity. Conclusion: Across Europe, the same amount of pollen released substantially different amounts of group 5 grass pollen allergen. This variation in allergen release is on top of variations in pollen counts. Molecular aerobiology, i.e. determining allergen in ambient air, may be a valuable addition to pollen counting.
Abstract:
A major determinant of the level of effective natural gas supply is the ease of supplying customers while minimizing total system costs. The aim of this work is to study the right number of Gas Supply Units (GSUs) and their optimal location in a gas network. This paper suggests a GSU location heuristic based on Lagrangean relaxation techniques. The heuristic is tested on the Iberian natural gas network, a system modelled with 65 demand nodes linked by physical and virtual pipelines. Lagrangean heuristic results, along with the allocation of loads to gas sources, are presented using a 2015 forecast gas demand scenario.
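A hedged sketch of a Lagrangean-relaxation heuristic for the underlying p-median location problem: the single-assignment constraints are relaxed and the multipliers updated by subgradient steps, with a feasible assignment recovered at each iteration. The random cost matrix, value of p and stopping rule are invented for illustration; this is not the authors' implementation.

```python
# Hypothetical sketch: Lagrangean relaxation for the p-median problem.
import numpy as np

def p_median_lagrangean(cost, p, iters=200):
    """cost[i, j]: weighted cost of serving demand node i from candidate
    site j. Returns (open_sites, assignment, best_upper_bound)."""
    n_dem, n_sites = cost.shape
    lam = cost.min(axis=1).astype(float)      # one multiplier per demand node
    best_ub, best_open, best_assign = np.inf, None, None

    for _ in range(iters):
        reduced = cost - lam[:, None]                   # reduced assignment costs
        contrib = np.minimum(reduced, 0.0).sum(axis=0)  # value of opening site j
        open_sites = np.argsort(contrib)[:p]            # p most attractive sites
        lb = contrib[open_sites].sum() + lam.sum()      # Lagrangean lower bound

        # Feasible (upper-bound) solution: each node goes to its cheapest open site.
        assignment = open_sites[np.argmin(cost[:, open_sites], axis=1)]
        ub = cost[np.arange(n_dem), assignment].sum()
        if ub < best_ub:
            best_ub, best_open, best_assign = ub, open_sites, assignment

        # Subgradient of the relaxed assignment constraints, multiplier update.
        x_relaxed = (reduced[:, open_sites] < 0).sum(axis=1)
        g = 1.0 - x_relaxed
        if not g.any() or best_ub <= lb + 1e-9:
            break
        step = 2.0 * (best_ub - lb) / (g @ g)
        lam = lam + step * g

    return best_open, best_assign, best_ub

# Tiny random example: 20 demand nodes, 8 candidate supply sites, p = 3.
rng = np.random.default_rng(0)
dem, sites = rng.random((20, 2)), rng.random((8, 2))
dist = np.linalg.norm(dem[:, None, :] - sites[None, :, :], axis=2)
print(p_median_lagrangean(dist, p=3))
```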
Abstract:
To comply with natural gas demand growth patterns and Europe's import dependency, the gas industry needs to organize an efficient upstream infrastructure. The best location of Gas Supply Units (GSUs) and the alternative transportation mode – by physical or virtual pipelines – are the key to a successful industry. In this work we study the optimal location of GSUs, as well as determining the most efficient allocation of gas loads to sources, selecting the best transportation mode, observing specific technical restrictions and minimizing total system costs. For the location of GSUs on the system we use the P-median problem; for assigning gas demand nodes to source facilities we use the classical transportation problem. The developed model is an optimisation-based approach built on a Lagrangean heuristic, using Lagrangean relaxation for P-median problems – the Simple Lagrangean Heuristic. The solution of this heuristic can be improved by adding a local search procedure – the Lagrangean Reallocation Heuristic. These two heuristics, Simple Lagrangean and Lagrangean Reallocation, were tested on a realistic network – the primary Iberian natural gas network, organized with 65 nodes connected by physical and virtual pipelines. Computational results are presented for both approaches, showing the location of gas sources and the allocation of loads, total system costs, and the gas transportation mode.
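For the allocation step, a minimal sketch of the classical transportation problem solved as a linear program with SciPy once the GSU sites are fixed; the costs, capacities and demands below are made-up illustrative numbers, not data from the Iberian network.

```python
# Hypothetical sketch: assign demand nodes to fixed supply sites by solving
# the transportation problem as a linear program.
import numpy as np
from scipy.optimize import linprog

def allocate(cost, supply, demand):
    """cost[s, d]: unit transport cost from source s to demand node d."""
    n_s, n_d = cost.shape
    c = cost.ravel()                      # decision variables x[s, d], flattened

    # Each demand node receives exactly its demand.
    A_eq = np.zeros((n_d, n_s * n_d))
    for d in range(n_d):
        A_eq[d, d::n_d] = 1.0
    # Each source ships no more than its capacity.
    A_ub = np.zeros((n_s, n_s * n_d))
    for s in range(n_s):
        A_ub[s, s * n_d:(s + 1) * n_d] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n_s, n_d), res.fun

cost = np.array([[4.0, 6.0, 9.0], [5.0, 3.0, 7.0]])   # 2 sources, 3 demand nodes
flows, total = allocate(cost, supply=[60.0, 50.0], demand=[30.0, 40.0, 20.0])
print(flows, total)
```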
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in the selection of distributed generation locations, taking into account the hourly load changes or the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to calculate the best-fit probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution, and the Weibull probability distribution. The methodology algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate the ability of the proposed methodology to handle efficiently the determination of the best location of distributed generation and the corresponding distribution networks.
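A minimal sketch of the Gaussian variant of this idea, assuming the hourly load centers and load magnitudes are already available: fit a weighted bivariate normal distribution and describe the preferred siting region as the ellipse containing a chosen probability mass. The coordinates, loads and function names are invented, and the Freund and Weibull alternatives mentioned above are not covered.

```python
# Hypothetical sketch: weighted bivariate Gaussian fit to hourly load
# centers and the Mahalanobis radius of the region holding `mass` probability.
import numpy as np

def gaussian_siting_region(centers, weights, mass=0.90):
    """centers: (H, 2) hourly load-center coordinates; weights: hourly load
    magnitudes. Returns (mean, covariance, radius) such that the ellipse
    {x : (x-mean)' cov^-1 (x-mean) <= radius^2} contains `mass`."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    mu = w @ centers
    diff = centers - mu
    cov = (w[:, None] * diff).T @ diff            # weighted covariance
    radius = np.sqrt(-2.0 * np.log(1.0 - mass))   # chi-square(2 dof) quantile
    return mu, cov, radius

# Example: 24 hourly load centers drifting with the daily load cycle.
rng = np.random.default_rng(1)
hours = np.arange(24)
centers = np.c_[2.0 + 0.5 * np.sin(2 * np.pi * hours / 24),
                1.0 + 0.3 * np.cos(2 * np.pi * hours / 24)]
centers += 0.05 * rng.standard_normal((24, 2))
loads = 80 + 40 * np.sin(2 * np.pi * (hours - 6) / 24) ** 2
mu, cov, r = gaussian_siting_region(centers, loads)
print(mu, r)
```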
Abstract:
The paper presents a study on business micro-location behaviour, as well as the corresponding factors of influence, conducted in two metropolitan areas, Bucharest-Ilfov (Romania) and Greater Porto (Portugal). By business micro-location we refer to a specific site, such as a building or facility, accommodating a business within a small, compact geographical area (e.g. a metropolitan area). At this geographical scale, the macroeconomic layer of factors was excluded, being applicable only when discerning between regions or countries. The factors derived from location theory and previous empirical studies were surveyed, completing a cross-sectional analysis in order to find out the specific weights of the location factors and preferences, by region and by industry. Based on the feedback on location from already established firms, specific weights were assigned by each industry to the main location factors, types of areas, and types of accommodation facilities. The authors also suggest a model to integrate these results into a Geographical Information System (GIS).
Abstract:
Nowadays there is an increasing number of location-aware mobile applications. However, these applications typically retrieve location only from a mobile device's GPS chip, which means that in indoor or denser environments they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but these systems can have a large estimation error since, in order to make the system wearable, they use low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are included and combined with the accelerometer data in order to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on the information from GPS and several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behaviour and correct the inertial sensor errors in real time, thus improving location estimation.
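A hedged sketch of the stance-phase detection idea: a sample is flagged as stance when the force sensor indicates ground contact and the accelerometer magnitude stays near gravity with low short-term variance, so that zero-velocity corrections could then be applied to limit drift; all thresholds, window lengths and the synthetic data are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch: stance detection from foot force + accelerometer data.
import numpy as np

G = 9.81  # m/s^2

def detect_stance(acc_xyz, force, fs,
                  acc_tol=0.6, var_tol=0.3, force_threshold=50.0, win=0.1):
    """acc_xyz: (N, 3) accelerations (m/s^2); force: (N,) foot force (N)."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    w = max(1, int(win * fs))
    # Rolling variance of the acceleration magnitude over a short window.
    pad = np.pad(mag, (w // 2, w - 1 - w // 2), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(pad, w)
    rolling_var = windows.var(axis=1)
    near_gravity = np.abs(mag - G) < acc_tol
    quiet = rolling_var < var_tol
    loaded = force > force_threshold
    return near_gravity & quiet & loaded   # boolean stance mask per sample

# Tiny synthetic example: 2 s at 100 Hz with a stance period from 0.8 s to 1.4 s.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
acc = np.c_[0.3 * np.sin(8 * t), 0.3 * np.cos(8 * t), G + 0.2 * np.sin(5 * t)]
stance_truth = (t > 0.8) & (t < 1.4)
swing = ~stance_truth
acc[swing] += np.random.default_rng(2).normal(0, 2.0, (np.count_nonzero(swing), 3))
force = np.where(stance_truth, 400.0, 5.0)
mask = detect_stance(acc, force, fs)
print(f"stance samples detected: {mask.sum()} / {stance_truth.sum()}")
```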