57 results for Large machines
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Knowing how much misalignment is tolerable is an important input for the design of a particle accelerator. In particle accelerators the beam must be guided and focused using bending magnets and magnetic lenses, respectively. The alignment of the lenses along a transport line aims to ensure that the beam passes through their optical axes and represents a critical point in the assembly of the machine. There are more and more accelerators in the world, many of which are very small machines. Because the existing literature and programs are mostly targeted at large machines, in this work we describe a method suitable for small machines. The method consists in statistically determining the alignment tolerance for a set of lenses. Differently from the methods used in standard simulation codes for particle accelerators, the statistical method we propose makes it possible to evaluate particle losses as a function of the alignment accuracy of the optical elements in a transport line. Results for 100 keV electrons in the 3.5-m-long conforming beam stage of the IFUSP Microtron are presented as an example of use. (C) 2010 Elsevier B.V. All rights reserved.
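The abstract does not include the authors' code; the sketch below only illustrates the kind of Monte Carlo tolerance study described, using a simplified one-dimensional thin-lens transport line. The lens parameters, beam parameters, and aperture are illustrative assumptions, not values from the paper.

```python
import numpy as np

def loss_fraction(sigma_align_mm, n_machines=200, n_particles=1000,
                  n_lenses=10, f_mm=250.0, drift_mm=350.0, aperture_mm=5.0,
                  seed=0):
    """Fraction of particles lost vs. rms transverse lens misalignment.

    Each 'machine' is one random realization of the lens offsets; particles
    are tracked through drifts and thin lenses and counted as lost when they
    exceed the aperture. All numbers here are placeholders.
    """
    rng = np.random.default_rng(seed)
    lost = total = 0
    for _ in range(n_machines):
        offsets = rng.normal(0.0, sigma_align_mm, n_lenses)
        x = rng.normal(0.0, 0.5, n_particles)    # position (mm)
        xp = rng.normal(0.0, 0.5, n_particles)   # angle (mrad)
        alive = np.ones(n_particles, dtype=bool)
        for dx in offsets:
            x = x + xp * drift_mm / 1000.0        # drift to the next lens
            xp = xp - (x - dx) / f_mm * 1000.0    # thin-lens kick about its (offset) axis
            alive &= np.abs(x) < aperture_mm      # aperture cut
        lost += np.count_nonzero(~alive)
        total += n_particles
    return lost / total

for sigma in (0.05, 0.1, 0.2, 0.5):  # rms alignment error in mm
    print(f"sigma = {sigma} mm -> losses = {loss_fraction(sigma):.3%}")
```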
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions to be taken, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
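As a concrete illustration of this kind of study (not the authors' code or data pipeline), the sketch below evaluates a Gaussian-kernel SVM over a grid of kernel radii by cross-validation on wavelet-derived statistical features; the wavelet choice (db4), the feature definitions, and the `segments`/`labels` inputs are assumptions.

```python
import numpy as np
import pywt  # PyWavelets, for the discrete wavelet transform
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(signal, wavelet="db4", level=4):
    """Simple statistics (mean of |c| and std) per DWT sub-band."""
    feats = []
    for c in pywt.wavedec(signal, wavelet, level=level):
        feats += [np.mean(np.abs(c)), np.std(c)]
    return feats

def accuracy_vs_radius(segments, labels, radii=np.logspace(-2, 2, 26)):
    """Cross-validated accuracy of a Gaussian-kernel SVM for each kernel radius.

    `segments` is an iterable of EEG windows and `labels` the 0/1 diagnoses;
    both are assumed to be provided by the caller.
    """
    X = np.array([wavelet_features(s) for s in segments])
    y = np.array(labels)
    scores = {}
    for r in radii:
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", gamma=1.0 / (2.0 * r**2), C=1.0))
        scores[r] = cross_val_score(clf, X, y, cv=10).mean()
    return scores
```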
Abstract:
In testing from a Finite State Machine (FSM), the generation of test suites which guarantee full fault detection, known as complete test suites, has been a long-standing research topic. In this paper, we present conditions that are sufficient for a test suite to be complete. We demonstrate that the existing conditions are special cases of the proposed ones. We also present an algorithm that checks whether a given test suite is complete. The experimental results show that the algorithm can be used for relatively large FSMs and test suites.
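The paper's sufficient conditions and checking algorithm are not reproduced in the abstract; the sketch below only shows the brute-force baseline the problem implies: running a test suite on a specification FSM and on candidate faulty implementations (mutants) and checking that every mutant is distinguished. Names and the mutant set are illustrative.

```python
# A deterministic Mealy FSM: transitions[state][input] = (next_state, output).
class FSM:
    def __init__(self, transitions, initial):
        self.transitions = transitions
        self.initial = initial

    def run(self, inputs):
        """Output sequence produced for an input sequence from the initial state."""
        state, outputs = self.initial, []
        for symbol in inputs:
            state, out = self.transitions[state][symbol]
            outputs.append(out)
        return outputs

def detects(spec, mutant, test):
    """A test detects a fault if the two machines produce different outputs."""
    return spec.run(test) != mutant.run(test)

def is_complete_against(spec, mutants, suite):
    """Brute-force completeness check with respect to an explicit fault domain."""
    return all(any(detects(spec, m, t) for t in suite) for m in mutants)
```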
Abstract:
PURPOSE: The objective of this paper is to report the clinical case of a patient who presented with chronic apical periodontitis, arising from internal inflammatory resorption followed by pulp necrosis, and the long-term success of root canal therapy using calcium hydroxide as root canal dressing. CASE DESCRIPTION: A 20-year-old male patient presented for routine dental treatment. On radiographic examination we noted an extensive radiolucent area, lateral to the permanent maxillary right lateral incisor, with possible communication with the lateral periodontium, suggestive of chronic apical periodontitis. Due to the detection of external root resorption, we used a calcium hydroxide root canal dressing, changed every 15 days, for a period of 2 months. Root canal filling was performed using gutta-percha cones with the lateral condensation technique. Radiographic follow-up performed 19 years after treatment indicated a periodontium in conditions of normality, with the presence of lamina dura. CONCLUSION: Calcium hydroxide is a suitable material to be used as root canal dressing in teeth with apical periodontitis. Long-term evaluation demonstrated a satisfactory clinical outcome following root canal treatment.
Abstract:
Marajó Island shows an abundance of paleochannels that are easily mapped in its eastern portion, where vegetation consists mostly of savannas. SRTM data make it possible to recognize paleochannels also in western Marajó, even considering the dense forest cover. A well-preserved paleodrainage network from the vicinity of the town of Breves (southwestern Marajó Island) was investigated in this work combining remote sensing and sedimentological studies. The palimpsest drainage system consists of a large meander connected to narrower tributaries. Sedimentological studies revealed mostly sharp-based, fining-upward sands for the channelized features, and interbedded muds and sands for floodplain areas. The sedimentary structures and facies successions are in perfect agreement with deposition in channelized and floodplain environments, as suggested by remote sensing mapping. The present study shows that this paleodrainage was abandoned during the Late Pleistocene, slightly earlier than the Holocene paleochannel systems from the eastern part of the island. Integration of previous studies with the data presented herein supports a tectonic origin, related to the opening of the Pará River along fault lineaments. This would explain the disappearance of the large, north- to northeastward-migrating channel systems in southwestern Marajó Island, which were replaced by the much narrower, south- to southeastward-flowing modern channels.
Abstract:
Deformation leads to a hardening of steel due to an increase in the density of dislocations and a reduction in their mobility, giving rise to a state of elevated residual stresses in the crystal lattice. In the microstructure, one observes an increase in the contribution of crystalline orientations which are unfavorable to the magnetization, as seen, for example, by a decrease in B(50), the magnetic flux density at a field of 50 A/cm. The present study was carried out with longitudinal strips of fully processed non-oriented (NO) electrical steel, with deformations up to 70% resulting from cold rolling in the longitudinal direction. With increasing plastic deformation, the value of B(50) gradually decreases until it reaches a minimum value, where it remains even for larger deformations. On the other hand, the coercive field H(c) continually increases. Magnetometry results and electron backscatter diffraction results are compared and discussed. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3560895]
Abstract:
Large-conductance Ca(2+)-activated K(+) channels (BK) play a fundamental role in modulating membrane potential in many cell types. The gating of BK channels and its modulation by Ca(2+) and voltage has been the subject of intensive research over almost three decades, yielding several of the most complicated kinetic mechanisms ever proposed. These mechanisms are characterized by a large number of open and closed states arranged, respectively, in two planes, named tiers. Transitions between states in the same plane are cooperative and modulated by Ca(2+). Transitions across planes are highly concerted and voltage-dependent. Here we reexamine the validity of the two-tiered hypothesis by restricting attention to the modulation by Ca(2+). Large single-channel data sets at five Ca(2+) concentrations were simultaneously analyzed from a Bayesian perspective by using hidden Markov models and Markov-chain Monte Carlo stochastic integration techniques. Our results support a dramatic reduction in model complexity, favoring a simple mechanism derived from the Monod-Wyman-Changeux allosteric model for homotetramers that is able to explain the Ca(2+) modulation of the gating process. This model differs from the standard Monod-Wyman-Changeux scheme in that one distinguishes when two Ca(2+) ions are bound to adjacent or diagonal subunits of the tetramer.
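For reference (this is the textbook Monod-Wyman-Changeux expression, not the modified adjacent/diagonal-subunit scheme favored in the paper), a minimal sketch of the open probability of a homotetrameric MWC channel gated only by Ca(2+); all parameter values are illustrative.

```python
def p_open_mwc(ca, K_open=10e-6, K_closed=100e-6, L=1000.0, n=4):
    """Open probability of a homotetrameric MWC model gated by Ca2+ only.

    ca       : Ca2+ concentration (M)
    K_open   : Ca2+ dissociation constant of the open conformation (M)
    K_closed : Ca2+ dissociation constant of the closed conformation (M)
    L        : closed/open equilibrium constant with no Ca2+ bound
    n        : number of identical binding sites (4 for a homotetramer)
    """
    open_term = (1.0 + ca / K_open) ** n
    closed_term = L * (1.0 + ca / K_closed) ** n
    return open_term / (open_term + closed_term)

for ca in (1e-7, 1e-6, 1e-5, 1e-4):  # 0.1 to 100 micromolar
    print(f"[Ca2+] = {ca:.0e} M -> P_open = {p_open_mwc(ca):.3f}")
```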
Abstract:
A smooth inflaton potential is generally assumed when calculating the primordial power spectrum, implicitly assuming that a very small oscillation in the inflaton potential creates a negligible change in the predicted halo mass function. We show that this is not true. We find that a small oscillating perturbation in the inflaton potential in the slow-roll regime can alter significantly the predicted number of small halos. A class of models derived from supergravity theories gives rise to inflaton potentials with a large number of steps, and many trans-Planckian effects may generate oscillations in the primordial power spectrum. The potentials we study are the simple quadratic (chaotic inflation) potential with superimposed small oscillations for small field values. Without leaving the slow-roll regime, we find that for a wide choice of parameters the predicted number of halos changes appreciably. For oscillations beginning in the 10^7-10^8 M_sun range, for example, we find that only a 5% change in the amplitude of the chaotic potential causes a 50% suppression of the number of halos with masses between 10^7 and 10^8 M_sun and an increase in the number of halos with masses <10^6 M_sun by factors of ~15-50. We suggest that this might be a solution to the problem of the lack of observed dwarf galaxies in the 10^7-10^8 M_sun range. This might also be a solution to the reionization problem, where a very large number of Population III stars in low-mass halos are required.
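The abstract does not give the explicit form of the modified potential; a commonly used parameterization, assumed here purely for illustration, is V(phi) = (1/2) m^2 phi^2 [1 + alpha sin(phi/Lambda)]. The sketch below evaluates this form and its slow-roll parameters numerically so one can check that a chosen (alpha, Lambda) keeps epsilon and |eta| small; the default numbers are arbitrary.

```python
import numpy as np

M_PL = 1.0  # reduced Planck mass (everything in Planck units)

def V(phi, m=1e-6, alpha=1e-4, lam=0.1):
    """Quadratic (chaotic) potential with a small superimposed oscillation.
    Functional form and parameter values are illustrative assumptions."""
    return 0.5 * m**2 * phi**2 * (1.0 + alpha * np.sin(phi / lam))

def slow_roll_parameters(phi, h=1e-4, **kw):
    """epsilon and eta from numerical derivatives of V; both should stay
    well below 1 over the field range of interest (slow-roll regime)."""
    v = V(phi, **kw)
    v1 = (V(phi + h, **kw) - V(phi - h, **kw)) / (2.0 * h)
    v2 = (V(phi + h, **kw) - 2.0 * v + V(phi - h, **kw)) / h**2
    epsilon = 0.5 * M_PL**2 * (v1 / v) ** 2
    eta = M_PL**2 * v2 / v
    return epsilon, eta

for phi in (15.0, 10.0, 5.0):
    eps, eta = slow_roll_parameters(phi)
    print(f"phi = {phi}: epsilon = {eps:.3e}, eta = {eta:+.3e}")
```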
Abstract:
The lightest supersymmetric particle may decay with branching ratios that correlate with neutrino oscillation parameters. In this case the CERN Large Hadron Collider (LHC) has the potential to probe the atmospheric neutrino mixing angle with sensitivity competitive to its low-energy determination by underground experiments. Under realistic detection assumptions, we identify the necessary conditions for the experiments at CERN's LHC to probe the simplest scenario for neutrino masses induced by minimal supergravity with bilinear R parity violation.
Abstract:
We consider a model where sterile neutrinos can propagate in a large compactified extra dimension, giving rise to Kaluza-Klein (KK) modes, while the standard model left-handed neutrinos are confined to a 4-dimensional spacetime brane. The KK modes mix with the standard neutrinos, modifying their oscillation pattern. We examine past and current experiments such as CHOOZ, KamLAND, and MINOS to estimate the impact of the possible presence of such KK modes on the determination of the neutrino oscillation parameters and simultaneously obtain limits on the size of the largest extra dimension. We find that the presence of the KK modes does not substantially improve the quality of the fit compared to the case of standard oscillations. By combining the results from CHOOZ, KamLAND, and MINOS, in the limit of a vanishing lightest neutrino mass, we obtain a stronger bound on the size of the extra dimension of ~1.0 (0.6) μm at 99% C.L. for the normal (inverted) mass hierarchy. If the lightest neutrino mass turns out to be larger, for example 0.2 eV, we obtain a bound of ~0.1 μm. We also discuss the expected sensitivities to the size of the extra dimension for future experiments such as Double CHOOZ, T2K, and NOνA.
Abstract:
Large-scale enzymatic resolution of racemic sulcatol 2 has proved useful for stereoselective biocatalysis. The reaction was fast and selective, using vinyl acetate as the acyl donor and lipase from Candida antarctica (CALB) as the catalyst. The large-scale reaction (5.0 g, 39 mmol) afforded high optical purities for S-(+)-sulcatol 2 and R-(+)-sulcatyl acetate 3 (ee > 99%) and good yields (45%) within a short time (40 min). Thermodynamic parameters for the chemoesterification of sulcatol 2 by vinyl acetate were evaluated. The enthalpy and Gibbs free energy values of this reaction were negative, indicating that the process is exothermic and spontaneous, in agreement with the results obtained enzymatically.
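As a worked illustration of how such resolutions are usually quantified (the abstract itself reports only ee and yield), the enantiomeric ratio E can be estimated from conversion and product ee with the standard Chen et al. expression; the conversion value used below is only a placeholder taken from the reported 45% yield.

```python
import math

def e_value_from_product(conversion, ee_product):
    """Enantiomeric ratio E for an irreversible kinetic resolution,
    from fractional conversion and product enantiomeric excess
    (Chen et al. expression)."""
    return (math.log(1.0 - conversion * (1.0 + ee_product)) /
            math.log(1.0 - conversion * (1.0 - ee_product)))

# Placeholder numbers: 45% conversion, product ee of 99%.
print(f"E = {e_value_from_product(0.45, 0.99):.0f}")
```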
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increased number of biomechanical gait variables reported and, subsequently, the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information to distinguish the age groups. The results of the present work demonstrate the potential of this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
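A minimal sketch of the kind of pipeline described (linear-kernel SVM plus forward feature selection), using scikit-learn; the data matrix `X` (34 runners x 31 variables) and labels `y` are assumed to be loaded elsewhere, and this is not the authors' implementation.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def select_and_score(X, y, n_features=6):
    """Forward-select n_features kinematic variables for a linear SVM and
    report the cross-validated accuracy on the selected subset."""
    svm = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    selector = SequentialFeatureSelector(svm, n_features_to_select=n_features,
                                         direction="forward", cv=5)
    selector.fit(X, y)
    X_selected = selector.transform(X)
    accuracy = cross_val_score(svm, X_selected, y, cv=5).mean()
    return selector.get_support(indices=True), accuracy
```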
Abstract:
The reduction of power losses in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
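The abstract does not spell out the encoding; as a rough illustration of the idea behind a node-depth representation (a simplification, not the operators used in the paper), each feeder tree can be stored as a depth-first list of (node, depth) pairs, so that pruning a subtree from one tree and grafting it onto another always yields a radial configuration without any explicit constraint checking.

```python
def subtree_span(tree, i):
    """Index range [i, j) occupied by the subtree rooted at position i."""
    depth = tree[i][1]
    j = i + 1
    while j < len(tree) and tree[j][1] > depth:
        j += 1
    return i, j

def prune_and_graft(source, target, i, k):
    """Move the subtree rooted at source[i] so that it hangs below target[k];
    both trees stay valid depth-first (node, depth) lists, i.e. radial."""
    i0, j = subtree_span(source, i)
    subtree = source[i0:j]
    shift = target[k][1] + 1 - subtree[0][1]
    grafted = [(node, depth + shift) for node, depth in subtree]
    new_source = source[:i0] + source[j:]
    new_target = target[:k + 1] + grafted + target[k + 1:]
    return new_source, new_target

# Example: move the subtree rooted at node 3 from feeder 1 to below node 12.
feeder1 = [(1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
feeder2 = [(10, 0), (11, 1), (12, 1)]
print(prune_and_graft(feeder1, feeder2, 2, 2))
```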
Abstract:
During the last few years, the evolution of fieldbuses and computer networks has allowed the integration of different communication systems involving both single production cells and production cells, as well as other systems for business intelligence, supervision, and control. Several well-established communication technologies exist today for public and non-public networks. Since most industrial applications are time-critical, the requirements of communication systems for remote control differ from those of common Internet applications such as the Web, e-mail, and file transfer. The solution proposed and outlined in this work is called CyberOPC. It includes the study and implementation of a new open communication system for the remote control of industrial CNC machines, making the transmission delay for time-critical control data shorter than in other OPC-based solutions while fulfilling cyber-security requirements.