45 results for automatic particle picking


Relevance:

20.00%

Publisher:

Abstract:

The Large Hadron Collider, constructed at the European Organization for Nuclear Research (CERN), is the largest single measuring instrument ever built and currently the most powerful particle accelerator in existence. The Large Hadron Collider includes six experiment stations, one of which is the Compact Muon Solenoid (CMS). The main purpose of the CMS is to track and study the particles produced in proton-proton collisions. The primary detectors utilized in the CMS are resistive plate chambers (RPCs). To obtain data from these detectors, a link system has been designed. The main idea of the link system is to receive data from the detector front-end electronics in parallel form and to transmit it onwards in serial form via an optical fiber. The system is mostly ready and in place. However, a problem has occurred with the innermost RPC detectors, located in the sector labeled RE1/1: the transmission lines for parallel data suffer from signal integrity issues over long distances. As a solution, a new version of the link system has been devised, one that fits into a smaller space and can be located within the CMS, closer to the detectors. So far, this RE1/1 link system has been only partially completed, with just the mechanical design and casing finished. In this thesis, the link system electronics for the RE1/1 sector have been designed by modifying the existing link system concept to better meet the requirements of the RE1/1 sector. In addition to completing the prototype of the RE1/1 link system electronics, some testing of the system has been performed to ensure the functionality of the design.
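
As a toy illustration of the link system's core idea, the sketch below shifts parallel words out one bit at a time and reassembles them on the receiving side; this is a deliberate simplification in Python, not the actual framed detector data path over the optical fiber.

```python
def serialize(words, width=8):
    """Shift parallel words out one bit at a time, MSB first (toy PISO)."""
    for word in words:
        for bit in range(width - 1, -1, -1):
            yield (word >> bit) & 1

def deserialize(bits, width=8):
    """Reassemble the serial bit stream into parallel words (toy SIPO)."""
    word, count, out = 0, 0, []
    for b in bits:
        word, count = (word << 1) | b, count + 1
        if count == width:
            out.append(word)
            word, count = 0, 0
    return out

# Round trip: parallel in, serial over the "fiber", parallel out.
assert deserialize(serialize([0xA5, 0x3C])) == [0xA5, 0x3C]
```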

Relevance:

20.00%

Publisher:

Abstract:

In the liberalized electricity markets that have emerged in many countries around the world, electricity distribution companies operate under competitive conditions. Accurate information about customers' energy consumption therefore plays an essential role in the distribution company's budgeting and in the correct planning and operation of the distribution network. This master's thesis focuses on describing the potential benefits of automatic meter reading (AMR) systems for electric utilities and residential customers. The major benefits of AMR illustrated in the thesis are distribution network management, power quality monitoring, load modelling, and detection of illegal electricity usage. Using power system state estimation as an example, it is shown that even a partial installation of AMR on the customer side leads to more accurate data about the voltage and power levels in the whole network. The thesis also describes the present state of AMR integration in Russia.
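
A minimal sketch of how extra measurements improve a weighted-least-squares state estimate, illustrating the claim above; the two-state network, measurement matrices, and noise levels are toy assumptions, not the thesis's case study.

```python
import numpy as np

def wls_covariance(H, sigmas):
    """Covariance of the WLS estimate for z = H x + e, e ~ N(0, diag(sigmas^2))."""
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)
    return np.linalg.inv(H.T @ W @ H)

# Toy 2-state system observed by two conventional (SCADA-like) measurements...
H_scada = np.array([[1.0, 0.0],
                    [1.0, -1.0]])
cov_before = wls_covariance(H_scada, [0.02, 0.02])

# ...then with one extra AMR measurement of the second state added.
H_with_amr = np.vstack([H_scada, [0.0, 1.0]])
cov_after = wls_covariance(H_with_amr, [0.02, 0.02, 0.01])

print(np.sqrt(np.diag(cov_before)))  # std of each state estimate
print(np.sqrt(np.diag(cov_after)))   # smaller for BOTH states, not only the metered one
```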

Relevance:

20.00%

Publisher:

Abstract:

In the last two decades of studying the Solar Energetic Particle (SEP) phenomenon, intensive emphasis has been put on how, when, and where these SEPs are injected into interplanetary space. It is well known that SEPs are related to solar flares and CMEs. However, the role of each in the acceleration of SEPs has been under debate ever since the Skylab mission, which started the era of space-borne CME observations and after which the major role was step by step reassigned from flares to CMEs. Since then, the shock wave generated by powerful CMEs at 2-5 solar radii has been considered the major accelerator. The current paradigm interprets the prolonged proton intensity-time profile in gradual SEP events as a direct effect of SEPs accelerated by the shock wave propagating in the interplanetary medium. Thus the powerful CME is thought of as a starter of the acceleration, and its shock wave as a continuing accelerator that produces such an intensity-time profile. It is generally believed that a single powerful CME, which might or might not be associated with a flare, is always the reason behind such gradual events.

In this work we use the Energetic and Relativistic Nuclei and Electron (ERNE) instrument on board the Solar and Heliospheric Observatory (SOHO) to present an empirical study showing the possibility of multiple accelerations in SEP events. We first identified 18 double-peaked SEP events by examining 88 SEP events; the peaks in the intensity-time profiles were separated by 3-24 hours. We divided the SEP events into four groups according to possible multiple acceleration, and in one of these groups we found evidence of multiple acceleration in the velocity dispersion and in a change of the abundance ratio at the transition to the second peak. We then explored the intensity-time profiles of all SEP events during solar cycle 23 and found that most SEP events are associated with multiple eruptions at the Sun; we call these Multi-Eruption Solar Energetic Particle (MESEP) events. We used data from the Large Angle and Spectrometric Coronagraph (LASCO) on board SOHO to determine the CMEs associated with such events, and data from the YOHKOH and GOES satellites to determine the associated flares. We identified four types of MESEP according to how the peaks appear in the intensity-time profiles over a wide range of energy levels. We found that the anisotropy of the flux, the He/p ratio, and the velocity dispersion alone are not sufficient to determine whether a peak is related to a particular eruption at the Sun. We then selected a rare event showing evidence of SEP acceleration from behind a previous CME; this work led to a conclusion inconsistent with the current SEP paradigm. By examining another MESEP event, we discovered that energetic particles accelerated by a second CME can penetrate a previous CME-driven decelerating shock. Finally, we combined the two MESEP events above with two new events and found a common basis for SEPs from a second CME penetrating a previous decelerating shock. This phenomenon is reported for the first time and is expected to have a significant impact on the modification of the current paradigm of solar energetic particle events.
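
The velocity dispersion analysis referred to above is, in essence, a linear fit of onset time against inverse particle speed, t_onset(E) = t_release + L / v(E). The sketch below applies this to hypothetical onset times; the channel energies and times are invented for illustration, not ERNE data.

```python
import numpy as np

C_KM_S = 299_792.458      # speed of light [km/s]
MP_MEV = 938.272          # proton rest energy [MeV]

def proton_speed(e_kin_mev):
    """Relativistic proton speed [km/s] from kinetic energy [MeV]."""
    gamma = 1.0 + np.asarray(e_kin_mev) / MP_MEV
    return C_KM_S * np.sqrt(1.0 - 1.0 / gamma**2)

# Hypothetical channel energies [MeV] and observed onset times [s] (invented).
energies = np.array([14.0, 20.0, 32.0, 50.0, 80.0])
onsets   = np.array([4200.0, 3650.0, 3100.0, 2700.0, 2350.0])

# Fit t_onset = t_release + L * (1/v): the slope is the apparent path length,
# the intercept is the particle release time at the Sun.
path_length_km, t_release = np.polyfit(1.0 / proton_speed(energies), onsets, 1)

print(f"apparent path length: {path_length_km / 1.496e8:.2f} AU")   # ~1.1 AU here
print(f"inferred release time: {t_release:.0f} s after the epoch")  # ~900 s here
```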

Relevance:

20.00%

Publisher:

Abstract:

Dirt counting and dirt particle characterisation of pulp samples are an important part of quality control in pulp and paper production, and the need for an automatic image analysis system that can characterise dirt particles in various pulp samples is critical. However, existing image analysis systems utilise a single threshold to segment the dirt particles in different pulp samples, which limits their precision, so an automatic image analysis system that overcomes this deficiency would be very useful. In this study, a developed Niblack thresholding method is proposed, which selects the threshold based on the number of segmented particles; in addition, Kittler thresholding is utilised. Compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS), both of these thresholding methods determine the dirt count of the different pulp samples accurately. In addition, the minimum resolution needed for acquiring a scanner image is defined. Among the dirt particle features considered, curl shows a sufficient difference to discriminate between bark and fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis, and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark or fibre bundles.
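
For reference, classic Niblack thresholding computes a local threshold T = m + k·s from the mean m and standard deviation s in a sliding window. The sketch below shows this plus a simple connected-component dirt count; the window size and k are placeholders, and the thesis's count-based threshold selection rule is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def niblack_threshold(image, window=25, k=-0.2):
    """Classic Niblack: T(x, y) = local mean + k * local std in a square window."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
    return mean + k * std

def dirt_count(image, window=25, k=-0.2):
    """Segment dirt (darker than the pulp background) and count the particles."""
    dirt_mask = image < niblack_threshold(image, window, k)
    _, n_particles = label(dirt_mask)
    return n_particles
```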

Relevance:

20.00%

Publisher:

Abstract:

This work is devoted to the development of a numerical method for convection-diffusion-dominated problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unified Eulerian-Lagrangian schemes (the particle transport method) within the framework of the operator splitting method. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion, and reaction terms are solved separately, under the assumption that each phenomenon occurs sequentially. Moreover, adaptivity and projection techniques are used, respectively, to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points. The numerical results show that the particle transport method improves the solutions of convection-diffusion-reaction (CDR) problems. The method is, however, more time-consuming than classical techniques such as the method of lines. Despite this drawback, the particle transport method can be used to simulate problems involving moving steep or smooth fronts, such as the separation of two or more components in a system.
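
A minimal 1-D sketch of the splitting idea, under my own toy assumptions: each time step advects the particles along the characteristics, applies the reaction as an exact ODE step, and treats diffusion as a Brownian increment on the particle positions (the thesis uses grid-based diffusion with projection instead).

```python
import numpy as np

def cdr_step(x, c, u, D, dt, rate, rng):
    """One split step: convection, then reaction, then diffusion."""
    x = x + u * dt                                          # follow characteristics
    c = c * np.exp(-rate * dt)                              # exact step for c' = -rate*c
    x = x + rng.normal(0.0, np.sqrt(2.0 * D * dt), x.size)  # diffusion as a random walk
    return x, c

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)        # particle positions
c = np.where(x < 0.5, 1.0, 0.0)        # steep initial front carried by the particles
for _ in range(100):
    x, c = cdr_step(x, c, u=1.0, D=1e-3, dt=0.005, rate=0.5, rng=rng)
# After t = 0.5 the front has advected to x ~ 1.0 and c has decayed by exp(-0.25).
```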

Relevance:

20.00%

Publisher:

Abstract:

Modern warfare is seeing the active development of a new trend: robotic warfare. One of the critical elements of robotic warfare systems is automatic target recognition, which recognises objects based on data received from sensors. This work considers aspects of the optical realization of such a system by means of NIR target scanning at fixed wavelengths. An algorithm was designed, an experimental setup was built, and samples of various modern gear and apparel materials were tested. For pattern testing, camouflage samples from armies currently engaged in armed conflicts were chosen. Tests were performed both in a clear atmosphere and in an artificially hot and extremely humid atmosphere simulating field conditions.
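
One plausible form of such an algorithm, shown purely as an assumption since the abstract does not specify it, is nearest-reference matching of reflectances measured at the fixed NIR wavelengths:

```python
import numpy as np

# Hypothetical reference reflectances at three fixed NIR wavelengths; the
# material names, wavelengths, and values are placeholders, not thesis data.
REFERENCES = {
    "cotton fabric":    np.array([0.62, 0.55, 0.48]),
    "synthetic camo A": np.array([0.35, 0.40, 0.52]),
    "foliage":          np.array([0.80, 0.72, 0.30]),
}

def classify(reflectance):
    """Nearest-reference match on normalised spectra (reduces illumination effects)."""
    r = reflectance / np.linalg.norm(reflectance)
    return min(REFERENCES, key=lambda name: np.linalg.norm(
        r - REFERENCES[name] / np.linalg.norm(REFERENCES[name])))

print(classify(np.array([0.33, 0.41, 0.50])))  # -> "synthetic camo A"
```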

Relevance:

20.00%

Publisher:

Abstract:

The skill of programming is a key asset for every computer science student. Many studies have shown that it is a hard skill to learn, and the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to aid the learning process during the last few decades. The studies in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. The thesis includes results from multiple empirical studies of the effects that the introduction and usage of these tools have on students' opinions and performance, and of the implications from a teacher's point of view.

The results show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tools motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and that one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded while teaching resources are limited: they allow us to tackle this problem by utilizing automatic assessment in the exercises that are best suited to the web (such as tracing and simulation), since the web supports students' independent learning regardless of time and place. In summary, we can use a course's resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even to increase the performance of the students.

This thesis also offers methodological results that contribute to the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation it uses. The standard procedure should also include capturing the screen together with audio, to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments.

As computer science educators, we face two important challenges. First, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Second, since we have relevant experience in conducting teaching-related experiments, we can support our colleagues in learning the essential know-how of research-based improvement of their teaching. Such work can also turn academic teaching into publications, and by utilizing this approach we can significantly increase the adoption of new tools and techniques and the overall knowledge of best practices. In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.

Relevance:

20.00%

Publisher:

Abstract:

Diabetes is a rapidly increasing worldwide problem, characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis, and it follows medical decision-making practice by providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made public on the web to set the baseline results for automatic detection of diabetic retinopathy. Although it deviates from the general context of the thesis, a simple and effective optic disc localisation method is also presented; optic disc localisation is discussed because normal eye fundus structures are fundamental in the characterisation of DR.
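
A minimal sketch of the one-class idea, assuming hypothetical lesion features and a placeholder likelihood threshold; it mirrors the density-estimation approach described above but is not the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 2-D colour features of annotated lesion pixels (placeholder data).
lesion_features = rng.normal([0.6, 0.3], 0.05, size=(500, 2))

# Estimate the lesion class's probability density with a Gaussian mixture.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(lesion_features)

# One-class rule: accept a pixel as "lesion" when its log-likelihood under the
# lesion density exceeds a threshold (here the 5th percentile of the training
# scores; the real operating point would come from ROC analysis).
threshold = np.quantile(gmm.score_samples(lesion_features), 0.05)

def is_lesion(pixels):
    return gmm.score_samples(pixels) >= threshold

print(is_lesion(np.array([[0.61, 0.29], [0.10, 0.90]])))  # -> [ True False]
```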

Relevance:

20.00%

Publisher:

Abstract:

New luminometric particle-based methods were developed to quantify protein and to count cells. The developed methods rely on the interaction of the sample with nano- or microparticles and on different principles of detection. In the fluorescence quenching, time-resolved luminescence resonance energy transfer (TR-LRET), and two-photon excitation fluorescence (TPX) methods, the sample prevents the adsorption of labeled protein to the particles; depending on the system, the addition of the analyte increases or decreases the luminescence. In the dissociation method, the adsorbed protein protects the Eu(III) chelate on the surface of the particles from dissociation at low pH. The experimental setups are user-friendly and rapid and require neither hazardous test compounds nor elevated temperatures. The sensitivity of the protein quantification (from 40 to 500 pg of bovine serum albumin in a sample) was 20-500-fold better than that of the most sensitive commercial methods. The quenching method exhibited low protein-to-protein variability, and the dissociation method was insensitive to the assay contaminants commonly found in biological samples. Fewer than ten eukaryotic cells were detected and quantified with all the developed methods under optimized assay conditions. Furthermore, two applications, a method for detecting protein aggregation and a cell viability test, were developed utilizing the TR-LRET method. Protein aggregation could be detected at a concentration (30 μg/L) more than 10,000 times lower than with the known methods of UV240 absorbance and dynamic light scattering. The TR-LRET method was combined with a nucleic acid assay using a cell-impermeable dye to measure the percentage of dead cells in a single-tube test with cell counts below 1000 cells per tube.

Relevance:

20.00%

Publisher:

Abstract:

A district heating system comprises production facilities, a distribution network, and heat consumers. The use of new automatic meter reading (AMR) systems is constantly increasing in district heating. This heuristic study shows how the AMR system can be exploited to find optimization opportunities in a district heating system. In this study, the district heating system is considered mainly from the viewpoint of operational optimization, with the focus on the core processes of heat production and distribution. Three objectives were set for this study: first, to examine general optimization opportunities in district heating systems; second, to determine the benefits of AMR for these optimization opportunities; and finally, to define a methodology for process improvement endeavours. Through a case study, this study shows the usefulness of AMR in specifying current deficiencies in a district heating system. Based on a literature review, a methodology for the improvement of business processes is presented. Additionally, some issues related to the future competitiveness of district heating are considered. In conclusion, some optimization objectives are considered more desirable than others, and the study shows that AMR is useful in specifying optimization targets in a district heating system. Further steps in the optimization process were not examined in detail; that would seem to be an interesting topic for further studies.

Relevance:

20.00%

Publisher:

Abstract:

Centrifugal pumps are a notable end-consumer of electrical energy. A typical application of a centrifugal pump is the filling or emptying of a reservoir tank, where the pump is often operated at a constant speed until the process is completed. Installing a frequency converter to control the motor replaces the traditional fixed-speed pumping system, allows the rotational speed profile to be optimized for the pumping task, and enables the estimation of the rotational speed and shaft torque of an induction motor without any additional measurements from the motor shaft. Variable-speed operation makes it possible to decrease the overall energy consumption of the pumping task. The static head of the pumping process may change during the pumping task: in such systems the minimum rotational speed changes while the reservoir fills or empties, and minimum energy consumption cannot be achieved with a fixed rotational speed. This thesis presents embedded algorithms to automatically identify, optimize, and monitor pumping processes between supply and destination reservoirs, and evaluates the optimization method based on the changing static head.
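
The dependence of the minimum speed on the static head follows from the pump affinity laws: the shut-off head scales with the square of the rotational speed, so flow can only start once H0·(n/n0)² exceeds the static head. A toy sketch with assumed pump coefficients, not values from the thesis:

```python
import numpy as np

N0 = 1450.0    # nominal speed [rpm]     (assumed)
H0 = 32.0      # shut-off head at n0 [m] (assumed)

def min_speed(static_head_m):
    """Lowest speed at which the pump can just start delivering flow:
    the shut-off head H0 * (n / N0)**2 must reach the static head."""
    return N0 * np.sqrt(static_head_m / H0)

# As the destination reservoir fills, the static head rises and so does n_min:
for h in (10.0, 16.0, 24.0):
    print(f"static head {h:4.1f} m -> minimum speed {min_speed(h):5.0f} rpm")
```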

Relevance:

20.00%

Publisher:

Abstract:

The general trend towards increasing efficiency and energy density drives industry to high-speed technologies. Active Magnetic Bearings (AMBs) are one of the technologies that allow contactless support of a rotating body; theoretically, there are no limitations on the rotational speed. The absence of friction, low maintenance cost, micrometer precision, and programmable stiffness have made AMBs a viable choice for highly demanding applications. Along with advances in power electronics, such as significantly improved reliability and cost, AMB systems have gained wide adoption in industry. The AMB system is a complex, open-loop unstable system with multiple inputs and outputs. For normal operation, such a system requires feedback control. To meet the high demands for performance and robustness, model-based control techniques should be applied. These techniques require an accurate plant model description and uncertainty estimates, and the advanced control methods require more effort at the commissioning stage. In this work, a methodology is developed for the automatic commissioning of a subcritical, rigid gas blower machine. The commissioning process includes open-loop tuning of separate parts such as sensors and actuators. The next step is to apply a system identification procedure to obtain a model for the controller synthesis. Finally, a robust model-based controller is synthesized and experimentally evaluated over the full operating range of the system. The commissioning procedure is developed using only the available system components and a priori knowledge, without any additional hardware. Thus, the work provides an intelligent system with self-diagnostics and automatic commissioning.
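
A generic sketch of the frequency-domain identification step (an empirical transfer function estimate from cross- and auto-spectra); the excitation, sample rate, and stand-in plant are my assumptions, not the thesis's actual procedure.

```python
import numpy as np
from scipy.signal import welch, csd, lfilter

fs = 10_000.0                          # sample rate [Hz] (assumed)
rng = np.random.default_rng(0)
u = rng.normal(size=100_000)           # broadband excitation signal

# Stand-in "plant" for demonstration: a lightly damped 2nd-order resonance.
y = lfilter([0.02], [1.0, -1.95, 0.9702], u) + 0.01 * rng.normal(size=u.size)

# Empirical transfer function estimate G(f) = S_uy(f) / S_uu(f).
f, s_uu = welch(u, fs=fs, nperseg=4096)
_, s_uy = csd(u, y, fs=fs, nperseg=4096)
G = s_uy / s_uu

print(f"resonance near {f[np.abs(G).argmax()]:.0f} Hz")  # ~230 Hz for this plant
```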

Relevance:

20.00%

Publisher:

Abstract:

Ride comfort of elevators is one of the quality criteria valued by customers. The objective of this master's thesis was to develop a process for measuring the ride comfort of automatic elevator doors. The door's operational noise was chosen as the focus area; other kinds of noise, for example those caused by pressure differences in the elevator shaft, were excluded. The thesis includes a theory part and an empirical part. In the first part, theories of quality management, quality measurement, and acoustics are presented. In the empirical part, the developed ride comfort measuring process is presented, different operational noise sources are analyzed, and an example is given of how the measuring process can be used to guide product development. To measure ride comfort, a process was developed in which a two-room silent room served as the measuring environment and an EVA-625 device was used for the actual measurement of door noise. A-weighted decibels were used to scale the noise pressure levels, and the door movement was monitored with an accelerometer. This made it possible to connect the noise with its sources, which in turn helped to find potential ride comfort improvements. The noise isolation class was also measured with the Ivie measuring system. Measuring door ride comfort provides feedback for product development and for managing the current product portfolio, and it enables the continuous improvement of elevator door ride comfort. The measuring results can also be used to support marketing arguments for the doors.
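
For reference, the A-weighting mentioned above is the standard IEC 61672 curve; the sketch below reproduces the textbook formula (roughly 0 dB at 1 kHz, strong attenuation at low frequencies), not the EVA-625 device's internal implementation.

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f [Hz] (IEC 61672 formula)."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.00   # normalised to 0 dB at 1 kHz

# Example: the curve attenuates low frequencies heavily.
for f in (63.0, 250.0, 1000.0, 4000.0):
    print(f"{f:6.0f} Hz: {a_weighting_db(f):+6.1f} dB")
```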