Abstract:
We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set: a robust local 3D RANSAC-based procedure filters the point set for outliers, and a local jet surface (a low-degree surface approximation) is then fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed. In the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
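The local RANSAC filtering step can be sketched as follows. This is a minimal illustration that fits a plane instead of the paper's jet surface; the function name, tolerance, and iteration count are hypothetical, not the authors' implementation.

```python
import random

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit a plane to 3D points with RANSAC; return (plane, inliers).

    A toy stand-in for local jet fitting: the local surface model here
    is just a plane z = a*x + b*y + c.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        # Sample 3 points and solve z = a*x + b*y + c for (a, b, c).
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = rng.sample(points, 3)
        det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
        if abs(det) < 1e-12:
            continue  # degenerate (collinear) sample
        a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
        b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
        c = z1 - a * x1 - b * y1
        # Points within tol of the candidate plane count as inliers.
        inliers = [p for p in points
                   if abs(p[2] - (a * p[0] + b * p[1] + c)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers
```

Run on a noisy neighbourhood, the consensus set excludes the outliers, and the surviving inliers are what the surface approximation would then be fitted to.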
Abstract:
A simple cloud point extraction procedure is presented for the preconcentration of copper in various samples. After complexation by 4-hydroxy-2-mercapto-6-propylpyrimidine (PTU), copper ions are quantitatively extracted into the Triton X-114-rich phase after centrifugation. Methanol acidified with 0.5 mol L-1 HNO3 was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). Analytical parameters, including the concentrations of PTU, Triton X-114 and HNO3, bath temperature, and centrifugation rate and time, were optimized. The influence of matrix ions on the recovery of copper ions was investigated. A detection limit (3SDb/m, n=4) of 1.6 ng mL-1 and an enrichment factor of 30 were achieved for Cu. The proposed procedure was applied to the analysis of environmental samples.
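The quoted figures of merit follow from standard definitions: the detection limit as k times the blank standard deviation over the calibration slope, and the enrichment factor as a ratio of calibration slopes. A minimal sketch; the numeric values in the test below are hypothetical, chosen only to illustrate the arithmetic, not taken from the paper.

```python
def detection_limit(sd_blank, slope, k=3.0):
    """LOD = k * SD_blank / m, the abstract's 3SDb/m criterion."""
    return k * sd_blank / slope

def enrichment_factor(slope_after, slope_before):
    """Ratio of calibration slopes with and without preconcentration."""
    return slope_after / slope_before
```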
Abstract:
A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper(II) prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl)azonaphthalen-2-ol (Sudan II) was used as a chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as an extracting agent in the presence of sodium dodecyl sulphate (SDS). After phase separation based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was determined spectrophotometrically at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L-1, with a detection limit of 0.085 µg L-1. The method was successfully applied to the quantification of copper in different beverage samples.
Abstract:
The quantitative structure property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled by using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative errors (RMSRE) of leave-one-out cross validation and external validation were 1.77 and 1.23, respectively. For the ANN method, the corresponding prediction RMSRE values were 1.65 and 1.16. A quantitative relationship between the MDEV index and Tb of PCDD/Fs was demonstrated, and both MLR and ANN are practicable for modeling it. The MLR and ANN models developed can be used to predict the Tb of PCDD/Fs; accordingly, the Tb of each PCDD/F was predicted by the developed models.
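The leave-one-out RMSRE evaluation can be sketched as follows. A one-descriptor least-squares fit stands in for the paper's MLR model, and the data in the test are synthetic; nothing here reproduces the paper's descriptor values.

```python
def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b (one-descriptor stand-in for MLR)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loo_rmsre(xs, ys):
    """Leave-one-out root mean square relative error, in percent.

    Each sample is held out in turn, the model is refitted on the
    rest, and the relative prediction error on the held-out sample
    is accumulated.
    """
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_linear(tx, ty)
        pred = a * xs[i] + b
        errs.append(((pred - ys[i]) / ys[i]) ** 2)
    return (sum(errs) / len(errs)) ** 0.5 * 100.0
```

On data that is exactly linear the score is zero; the reported 1.77 and 1.65 percent figures correspond to small relative deviations of the predicted boiling points.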
Abstract:
The aim of this thesis is to investigate the thermal loading of the semiconductor IGCT switches of a medium voltage three-level NPC inverter at different operating points. The objective is to develop both a fairly accurate off-line simulation program and a simulation model simple enough that its implementation in an embedded system is practical and real-time use becomes feasible. Active loading limitation of the inverter can be realized with a thermal model that is practical in real-time use. Determination of component heating is divided into two parts: calculation of component losses and construction of a thermal network. The basics of both parts are clarified. The simulation environment is Matlab-Simulink. Two different models are constructed, a more accurate one and a simplified one. Potential simplifications are identified with the help of the first, included in the latter model, and the functionalities of both models are compared. When the calculation time step is increased, fewer components and fewer time constants of the thermal network need to be considered in the simplified model. The heating of a switching component depends on its topological position and on the inverter's operating point. The output frequency of the converter mainly determines which switching component, because of its losses and heating, limits the performance of the converter. Comparison of results given by the different thermal models demonstrates that with larger time steps, describing fast switching losses becomes difficult. Most articles and papers dealing with this subject are written for two-level inverters, and inverters applying direct torque control (DTC) are rarely investigated from the heating point of view. Hence, this thesis complements the existing material.
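The thermal-network half of the approach can be sketched with a Foster-type RC network: each branch is a first-order lag from loss power to a partial temperature rise, and the junction temperature rise is the sum of the branches. The branch parameters below are illustrative, not values from the thesis.

```python
import math

def foster_step(temps, p_loss, dt, rs, taus):
    """Advance a Foster-type thermal network one time step.

    temps:  current temperature rises of the RC branches [K]
    p_loss: power dissipated in the component during the step [W]
    rs:     thermal resistances of the branches [K/W]
    taus:   thermal time constants of the branches [s]
    Each branch obeys dT/dt = (r*p_loss - T)/tau, integrated exactly
    over dt, so large time steps stay numerically stable.
    """
    return [t + (r * p_loss - t) * (1.0 - math.exp(-dt / tau))
            for t, r, tau in zip(temps, rs, taus)]
```

Stepping at constant load, the summed branch temperatures converge to P times the total thermal resistance; this exact-integration form is one reason a coarse time step can work in a simplified real-time model, although, as the abstract notes, fast switching-loss transients are then hard to represent.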
Abstract:
This paper studies the relationship between earnings management and investors. An analysis of incentives reveals that most of them are opportunistic in nature. Unfortunately, an investor would need insider information to distinguish between different forms of earnings management. Investors in some countries seem to devalue earnings when a government body has signaled that earnings management might be involved; without such a clear signal, however, the behavior seems to reverse among non-institutional investors.
Abstract:
In the current economic situation companies try to reduce their expenses. One solution is to improve the energy efficiency of their processes. It is known that pumping applications account for 20 to 50% of the energy usage in certain industrial plant operations, and some studies have shown that 30% to 50% of the energy consumed by pump systems could be saved by changing the pump or the flow control method. The aim of this thesis is to create a mobile measurement system that can calculate the operating point of a pump drive. This information can be used to determine the efficiency of the pump drive operation and to develop a solution that brings the pump's efficiency to the maximum possible value, which can greatly reduce the pump drive's life cycle cost. In the first part of the thesis, a brief introduction to the details of pump drive operation is given, the methods that can be used in the project are presented, and the available platforms for the project implementation are reviewed. In the second part of the thesis, the components of the project are presented and each created component is described in detail. Finally, the results of laboratory tests are presented, compared and analyzed. In addition, the operation of the created system is analyzed and suggestions for future development are given.
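One common sensorless way to locate a pump's operating point, in line with the thesis's goal, combines the affinity laws with the pump's published power characteristic: the measured shaft power is referred to nominal speed, the P(Q) curve is inverted, and the flow is scaled back. This is a sketch of that general QP-curve method under the assumption of a monotonic characteristic; the curve data and signatures are illustrative, not the thesis's implementation.

```python
def flow_from_power(p_meas, n_meas, n0, q_grid, p_grid):
    """Estimate flow from shaft power and speed (QP-curve method).

    p_meas: measured shaft power at speed n_meas
    n0:     nominal speed of the published P(Q) curve
    q_grid, p_grid: tabulated P(Q) characteristic at n0, P monotonic
    Affinity laws: P scales with n**3 and Q scales with n.
    """
    p0 = p_meas * (n0 / n_meas) ** 3  # refer measured power to n0
    for i in range(len(p_grid) - 1):
        p1, p2 = p_grid[i], p_grid[i + 1]
        if p1 <= p0 <= p2:
            q1, q2 = q_grid[i], q_grid[i + 1]
            # linear interpolation on the tabulated curve
            q0 = q1 + (p0 - p1) * (q2 - q1) / (p2 - p1)
            return q0 * (n_meas / n0)  # scale flow back to n_meas
    raise ValueError("power outside the characteristic curve")
```

Knowing the estimated flow and speed then allows the drive to judge how far the pump operates from its best-efficiency point without any external meters.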
Abstract:
This study verified the accuracy and efficiency of the Point-Centered Quarter Method (PCQM) using different numbers of individuals per sampled area at 28 quarter points in an Araucaria forest, southern Paraná, Brazil. Three variations of the PCQM, differing in the number of individual trees sampled, were compared: the standard PCQM (SD-PCQM), with four individuals sampled per point (one in each quarter); a first variation (VAR1-PCQM), with eight individuals per point (two in each quarter); and a second variation (VAR2-PCQM), with 16 individuals per point (four in each quarter). Thirty-one species of trees were recorded by the SD-PCQM method, 48 by VAR1-PCQM and 60 by VAR2-PCQM. The completeness of the vegetation census and the diversity index increased with the number of individuals considered per quadrant, indicating that VAR2-PCQM was the most accurate and efficient of the three methods.
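The density estimator behind the standard PCQM (the classic Cottam-Curtis formula) can be sketched as follows; the variations with more individuals per quarter use corrected estimators, which are not shown here, and the distances in the test are hypothetical.

```python
def pcqm_density(distances):
    """Cottam-Curtis PCQM density estimate: 1 / (mean distance)**2.

    distances: point-to-nearest-tree distances, one per quarter,
    in metres; the result is trees per square metre.
    """
    mean_d = sum(distances) / len(distances)
    return 1.0 / mean_d ** 2
```

The mean point-to-plant distance is taken as the side of the average area occupied by one tree, which is why its square inverts to a density.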
Abstract:
We live in an age where rationalization and demands of efficiency taint every aspect of our lives, both as individuals and as a society. Even warfare cannot escape the increased speed of human interaction. Time is a resource to be managed; it has to be optimized, saved and won in military affairs as well. The purpose of this research paper is to analyze the dogmatic texts of military thought to search for what the classics of strategy saw in the interrelations of temporality and warfare, and whether their thoughts remain meaningful in the contemporary conjuncture. Since the way a society functions is reflected in the way it conducts its wars, there naturally are differences between an agrarian, an industrial and an information society. Theorists of different eras emphasize things specific to their times, but warfare, like any human interaction, is always bounded by temporality. Not only is the pace of warfare dependent on the progress of the society, but time permeates warfare in all its aspects. This research paper focuses on two specific topics that arose from the texts themselves: how time should be managed and manipulated in warfare, and how to economize it and "win" it from the enemy. A method where lengthy quotations are used to illustrate the main points of the strategists has been chosen for this research paper. While Clausewitz is the most prominent source of quotations, thoughts from ancient India and China are represented as well, to prove that the combination of the right force in the right place at the right time is still the way of the victorious. Tactics change in the course of time, but the principles of strategy remain unaltered and are only adapted to suit new situations. While ancient and pre-modern societies had their focus on finding auspicious moments for battle in the flow of kronos-time based on divinities, portents and auguries, we can trace elements of the manipulation of time in warfare from the earliest surviving texts.
While time as a fourth dimension of the battlespace emerged only in the modern era, all through the history of military thought it has had a profound meaning. In the past time could be squandered, today it always has to be won. This paper asks the question “why”.
Abstract:
Superconductor-normal metal point contacts were investigated using different combinations of Cu, brass, Nb and NbTi. The resulting spectra contained side peaks. The currents at which these side peaks appeared depended on the radii of the contacts: for contacts with Nb this dependence was quadratic, while for contacts with NbTi it was linear. Based on this, we argue that the side peaks of the Nb contacts are due to the critical current density being exceeded, whereas the side peaks of the NbTi contacts are caused by the self-magnetic field exceeding the lower critical field of NbTi. The NbTi contacts did not show the expected contribution from the vanishing Maxwell resistance of the superconductor, a question that remains open.
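The two scalings argued for above can be written out directly: a radius-independent critical current density gives a current quadratic in the contact radius, while a self-field limit at the lower critical field gives a current linear in the radius. A sketch of that arithmetic; the material parameters in the test are placeholders, not measured values.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def i_critical_density(j_c, r):
    """Current at which the critical current density j_c is exceeded
    in a contact of radius r: I = j_c * pi * r**2 (the Nb case)."""
    return j_c * math.pi * r ** 2

def i_self_field(b_c1, r):
    """Current at which the self-field at the contact perimeter,
    B = mu0 * I / (2*pi*r), reaches the lower critical field b_c1:
    I = 2*pi*r*b_c1 / mu0 (the NbTi case)."""
    return 2.0 * math.pi * r * b_c1 / MU0
```

Doubling the radius quadruples the first current but only doubles the second, which is the signature used to distinguish the two mechanisms.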
Abstract:
Some material aspects, such as grain size, purity and anisotropy, exert an important influence on surface quality, especially in single point diamond turning. The aim of this paper is to present and discuss some critical factors that can limit the accuracy of ultraprecision machining of non-ferrous metals, and to identify their effects on the cutting mechanism with single point diamond tools. This is done through observations of machined surfaces and chips using optical and scanning electron microscopy. Solutions to reduce the influence of some of these limiting factors, related to the mechanism of generation of mirror-like surfaces, are discussed.
Abstract:
Fan systems are responsible for approximately 10% of the electricity consumption in the industrial and municipal sectors, and it has been found that there is energy-saving potential in these systems. To this end, variable speed drives (VSDs) are used to enhance the efficiency of fan systems. Usually, fan system operation is optimized based on measurements of the system, but meters that could be used for this purpose are seldom readily installed. Thus, sensorless methods are needed for the optimization of fan system operation. In this thesis, methods for fan operating point estimation with a variable speed drive are studied and discussed. These methods can be used for the energy-efficient control of the fan system without additional measurements. The operation of these methods is validated by laboratory measurements and data from an industrial fan system. In addition to energy consumption, condition monitoring of fan systems is a key issue, as fans are an integral part of various production processes. Fan system condition monitoring is usually carried out with vibration measurements, which again increase the system complexity. However, variable speed drives can already be used for pumping system condition monitoring. It would therefore add to the usability of a variable-speed-driven fan system if the variable speed drive could be used as a condition monitoring device. In this thesis, sensorless detection methods for three lifetime-reducing phenomena are suggested: detection of fan contamination build-up, of the correct rotational direction, and of fan surge. The methods use the variable speed drive's monitoring and control options together with simple signal processing methods, such as power spectral density estimates. The methods have been validated by laboratory measurements.
The key finding of this doctoral thesis is that a variable speed drive can be used on its own as a monitoring and control device for the fan system energy efficiency, and it can also be used in the detection of certain lifetime-reducing phenomena.
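The simple signal processing mentioned above, a power spectral density estimate over drive-internal signals, can be sketched with a plain periodogram; a Welch-type averaged estimate would refine this, and the actual surge-detection thresholds used in the thesis are not reproduced here.

```python
import math

def periodogram(signal, fs):
    """Single-sided power spectrum of a real signal (plain DFT).

    signal: samples of a drive quantity, e.g. estimated shaft power
    fs:     sampling frequency [Hz]
    Returns (freqs, power); a stand-in for the Welch-type PSD
    estimates used to spot low-frequency surge components.
    """
    n = len(signal)
    freqs, power = [], []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2.0 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        im = -sum(s * math.sin(2.0 * math.pi * k * i / n)
                  for i, s in enumerate(signal))
        freqs.append(k * fs / n)
        power.append((re * re + im * im) / (n * fs))
    return freqs, power
```

A surge-like oscillation superimposed on the monitored signal shows up as a spectral peak well below the rotational frequency, which the drive can flag without any external vibration sensor.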
Abstract:
Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next generation robots. However, operators of currently available robots are working within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning; this accuracy requirement gives rise to a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market, as manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, however, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for all robots that have two free digital input ports. It follows the traditional approach of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. The rotations about two axes are defined by the center axis; the last rotation, about the Z-axis, is calculated for tools that have different widths along the X- and Y-axes. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
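The two-barrier idea can be sketched geometrically: two detection points along the tool, one per light-barrier plane, define the center axis, from which the two tilt rotations follow. The angle conventions below are an illustrative choice, not the thesis's exact definitions.

```python
import math

def tool_axis(p_lower, p_upper):
    """Unit direction of the tool center axis from two detected points.

    p_lower, p_upper: 3D points (x, y, z) at which the two parallel
    light barriers detect the tool, in robot base coordinates.
    Returns the unit axis and two illustrative tilt angles of the
    axis relative to the base Z-axis.
    """
    d = [u - l for l, u in zip(p_lower, p_upper)]
    norm = math.sqrt(sum(c * c for c in d))
    axis = [c / norm for c in d]
    rx = math.atan2(axis[1], axis[2])  # tilt seen in the Y-Z plane
    ry = math.atan2(axis[0], axis[2])  # tilt seen in the X-Z plane
    return axis, rx, ry
```

A perfectly vertical tool yields a Z-aligned axis and zero tilt; the remaining rotation about the tool axis itself has to be resolved separately, as the abstract notes, from the differing tool widths along X and Y.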
Abstract:
We have developed a procedure for nonradioactive single strand conformation polymorphism analysis and applied it to the detection of point mutations in the human tumor suppressor gene p53. The protocol does not require any particular facilities or equipment, such as radioactive handling, large gel units for sequencing, or a semiautomated electrophoresis system. The technique consists of amplification of DNA fragments by PCR with specific oligonucleotide primers, denaturation, and electrophoresis on small neutral polyacrylamide gels, followed by silver staining. The sensitivity of this procedure is comparable to that of other described techniques, and the method is easy to perform and applicable to a variety of tissue specimens.
Abstract:
This thesis presents point-contact measurements between superconductors (Nb, Ta, Sn, Al, Zn) and ferromagnets (Co, Fe, Ni) as well as non-magnetic metals (Ag, Au, Cu, Pt). The point contacts were fabricated using the shear method. The differential resistance of the contacts was measured either in liquid He at 4.2 K or in vacuum in a dilution refrigerator at varying temperatures down to 0.1 K. The contact properties were investigated as a function of size and temperature. The measured Andreev-reflection spectra were analysed in the framework of the BTK model, a three-parameter model that describes current transport across a superconductor-normal conductor interface. The original BTK model was modified to include the effects of spin polarization or the finite lifetime of the Cooper pairs. Our polarization values for the ferromagnets at 4.2 K agree with the literature data, but the analysis was ambiguous because the experimental spectra, both with ferromagnets and non-magnets, could be described equally well either with spin polarization or with finite lifetime effects in the BTK model. With the polarization model the Z parameter varies from almost 0 to 0.8, while the lifetime model produces Z values close to 0.5. Measurements at lower temperatures partly lift this ambiguity, because the magnitude of thermal broadening is small enough to separate lifetime broadening from the polarization. The reduced magnitude of the superconducting anomalies for Zn-Fe contacts required an additional modification of the BTK model, which was implemented as a scaling factor. Adding this parameter led to reduced polarization values. However, reliable data are difficult to obtain because different parameter sets produce almost identical spectra.