842 results for data movement problem
Abstract:
What can we learn from solar neutrino observations? Is any solution to the solar neutrino anomaly favored by the present experimental panorama? After the SNO results, is it possible to affirm that neutrinos have mass? To answer such questions we analyze the currently available data from the solar neutrino experiments, including the recent SNO result, in view of several acceptable solutions to the solar neutrino problem based on different conversion mechanisms, for the first time using the same statistical procedure throughout. This allows a direct comparison of the goodness of fit among the different solutions, from which we can assess the current status of each proposed dynamical mechanism. These solutions are based on different assumptions: (a) neutrino mass and mixing, (b) a nonvanishing neutrino magnetic moment, (c) the existence of nonstandard flavor-changing and nonuniversal neutrino interactions, and (d) a tiny violation of the equivalence principle. We investigate the quality of the fit provided by each of these solutions not only to the total rate measured by all the solar neutrino experiments but also to the recoil electron energy spectrum measured at different zenith angles by the Super-Kamiokande Collaboration. We conclude that several nonstandard neutrino flavor conversion mechanisms provide a very good fit to the experimental data, comparable with (or even slightly better than) the best-known solution to the solar neutrino anomaly, the neutrino oscillation induced by mass.
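The like-for-like statistical comparison described here can be illustrated with a minimal chi-square sketch; the rates, uncertainties, and model predictions below are invented placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical measured rates, uncertainties, and predictions of two
# candidate conversion mechanisms -- placeholder numbers, not the paper's.
observed = np.array([0.47, 0.54, 0.35])   # ratio data/model per experiment
sigma    = np.array([0.02, 0.07, 0.03])
predictions = {
    "mass-induced oscillation": np.array([0.45, 0.52, 0.38]),
    "magnetic moment":          np.array([0.48, 0.50, 0.36]),
}

# The same statistical procedure for every solution -- here a plain
# chi-square -- makes the goodness of fit directly comparable.
for name, pred in predictions.items():
    chi2 = np.sum(((observed - pred) / sigma) ** 2)
    print(f"{name}: chi2 = {chi2:.2f} ({len(observed)} data points)")
```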
Abstract:
This paper deals with energy pumping in a MEMS gyroscope, a nonlinear dynamical system modeled as a proof mass constrained to move in a plane with two resonant modes that are nominally orthogonal. The two modes are ideally coupled only by the rotation of the gyro about the plane's normal vector. We also develop a linear optimal control design for reducing the oscillatory movement of the nonlinear system to a stable point.
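A linear optimal control design of this kind can be sketched as a standard LQR on a linearized two-mode model; the frequencies, rotation rate, and weight matrices below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Minimal LQR sketch for a linearized two-mode proof-mass model.
w1, w2 = 1.0, 1.0   # nominal resonant frequencies of the two modes (assumed)
Omega  = 0.05       # rotation rate coupling the modes via Coriolis terms

# State: [x1, v1, x2, v2]; the modes couple only through the rotation Omega.
A = np.array([
    [0.0,      1.0,      0.0,     0.0],
    [-w1**2,   0.0,      0.0,  2*Omega],
    [0.0,      0.0,      0.0,     1.0],
    [0.0,  -2*Omega,  -w2**2,     0.0],
])
B = np.array([[0.0], [1.0], [0.0], [0.0]])  # force actuation on mode 1 only

Q = np.eye(4)          # penalize oscillation amplitude and velocity
R = np.array([[1.0]])  # penalize control effort

# Solve the continuous algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```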
Abstract:
Processing efficiency theory predicts that anxiety reduces the processing capacity of working memory and has detrimental effects on performance. When tasks place little demand on working memory, the negative effects of anxiety can be avoided by increasing effort: although performance efficiency decreases, there is no change in performance effectiveness. When tasks impose a heavy demand on working memory, however, anxiety leads to decrements in both efficiency and effectiveness. These predictions were tested using a modified table tennis task that placed low (LWM) or high (HWM) demands on working memory. Cognitive anxiety was manipulated through a competitive ranking structure and prize money. Participants' accuracy in hitting concentric circle targets in predetermined sequences was taken as a measure of performance effectiveness, while probe reaction time (PRT), perceived mental effort (RSME), visual search data, and arm kinematics were recorded as measures of efficiency. Anxiety had a negative effect on performance effectiveness in both the LWM and HWM tasks. Gaze frequency, PRT, and RSME values increased in both tasks under high versus low anxiety conditions, implying decrements in performance efficiency. However, participants spent more time tracking the ball in the HWM task and employed a shorter tau margin when anxious. Although anxiety impaired both performance effectiveness and efficiency, the decrements in efficiency were more pronounced in the HWM task than in the LWM task, providing support for processing efficiency theory.
Abstract:
Objective: To determine the immediate and longer-term effects on tongue movement following the placement of an experimental opening through a palatal obturator (a replicate of the subject's prosthesis) worn by an adult male with an unrepaired cleft of the hard and soft palate. Methods: Tongue movements associated with an anterior experimental opening of 20 mm² were examined under three conditions: a control condition in which the subject wore the experimental obturator completely occluded, a condition immediately after drilling the experimental opening through the obturator, and a condition after 5 days during which the subject wore the experimental obturator with the opening. An Electromagnetic Articulograph was used to record tongue movements during speech. Results: The findings partly revealed that the immediate introduction of a perturbation to the speech system (an experimental fistula) had a temporary effect on tongue movement. After sustained perturbation (for 5 days), the system normalized, returning toward the control condition's behavior. Perceptual data were consistent with kinematic tongue movement direction in most cases. Conclusions: Although the immediate response can be interpreted as the subject's attempt to move the tongue toward the opening to compensate for air loss, the findings following a sustained perturbation indicate that, with time, other physiological adjustments (such as respiratory adjustments) may help reestablish the requirements of a pressure-regulating system.
Abstract:
The Bouguer gravity anomaly of northwestern Ceará state in north-central Brazil was separated into its regional and residual components, which were interpreted separately. By assuming that the sources of the regional anomalies are the depth variations of the crust-mantle interface, the mapping of these variations permitted the identification of crustal thickening zones that may be related to regional structures. The gravity residual sources coincide with occurrences of high-grade rocks (granulites) associated with medium-grade gneisses. In addition, the major strike-slip zones have significant signatures in the gravity data. This geophysical interpretation is compatible with a tectonic framework in which the area comprises two crustal blocks conjoined by an A-type suture. The blocks are displaced along an oblique ramp with dextral movement, which played an important role in uplifting high-grade rocks from the lower crust to upper crustal levels. The suture zone corresponds to an imbricated compressive system dipping to the east and complicated by late dextral strike-slip shear zones.
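For illustration only: a regional/residual separation can be sketched as a low-order polynomial trend fit and subtraction. The abstract does not state which separation technique was actually used, so the method, grid, and anomaly values below are all assumptions.

```python
import numpy as np

# Synthetic Bouguer grid: a linear regional trend plus a local (residual)
# anomaly. Separation here is an assumed least-squares plane fit.
ny, nx = 50, 50
y, x = np.mgrid[0:ny, 0:nx]
bouguer = -0.3 * x + 0.1 * y + 5.0 * np.exp(-((x - 25)**2 + (y - 25)**2) / 40.0)

# Least-squares plane fit: regional ~ a + b*x + c*y
G = np.column_stack([np.ones(x.size), x.ravel(), y.ravel()])
coef, *_ = np.linalg.lstsq(G, bouguer.ravel(), rcond=None)
regional = (G @ coef).reshape(ny, nx)
residual = bouguer - regional   # short-wavelength part, tied to crustal sources

print("plane coefficients:", coef)
print("residual peak:", residual.max())
```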
Abstract:
Minimization of a differentiable function subject to box constraints is proposed as a strategy for solving the generalized nonlinear complementarity problem (GNCP) defined on a polyhedral cone. It is not necessary to calculate projections, which complicate and sometimes even preclude the implementation of algorithms for solving these kinds of problems. Theoretical results are presented that relate stationary points of the minimized function to solutions of the GNCP. Perturbations of the GNCP are also considered, and results are obtained on the resolution of GNCPs under very general assumptions on the data. These theoretical results show that local methods for box-constrained optimization applied to the associated problem are efficient tools for solving the GNCP. Numerical experiments that encourage the use of this approach are presented.
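The general idea, minimizing a differentiable merit function under box constraints instead of projecting onto the cone, can be sketched on the simplest special case. The squared Fischer-Burmeister reformulation and the toy linear instance below are assumed for illustration; they are not necessarily the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize

# Toy NCP (the GNCP with the nonnegative-orthant cone): find x >= 0 with
# F(x) >= 0 and x.F(x) = 0, here for a small linear map F.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])

def F(x):
    return M @ x + q

def merit(x):
    a, b = x, F(x)
    phi = np.sqrt(a**2 + b**2) - a - b   # Fischer-Burmeister: zero iff complementary
    return 0.5 * np.sum(phi**2)          # squared form is differentiable

# Box constraints x >= 0; no projections onto the cone are ever computed.
res = minimize(merit, x0=np.ones(2), method="L-BFGS-B",
               bounds=[(0.0, None)] * 2)
x = res.x
print("x =", x, " F(x) =", F(x), " x.F(x) =", x @ F(x))
```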
Abstract:
Interactive visual representations complement traditional statistical and machine learning techniques for data analysis, allowing users to play a more active role in the knowledge discovery process and making the whole process more understandable. Though visual representations are applicable to several stages of the knowledge discovery process, visualization is commonly used in the initial stages to explore and organize a sometimes unknown and complex data set. In this context, the integrated and coordinated use of multiple graphical representations, in which user actions can affect multiple visualizations when desired, allows data to be observed from several perspectives and offers richer information than isolated representations. In this paper we propose an underlying model for an extensible and adaptable environment that allows independently developed visualization components to be gradually integrated into a user-configured knowledge discovery application. Because a major requirement when using multiple visual techniques is the ability to link them, so that user actions executed on one representation propagate to others if desired, the model also allows runtime configuration of coordinated user actions over different visual representations. We illustrate how this environment is being used to assist data exploration and organization in a climate classification problem.
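A coordination model of this kind can be sketched with a small mediator that registered views publish user actions to; the class and method names below (Coordinator, View) are illustrative, not the paper's API.

```python
from typing import Callable, Dict, List

class Coordinator:
    """Mediator: views register handlers; actions propagate between views."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable]] = {}

    def register(self, action: str, handler: Callable) -> None:
        self._handlers.setdefault(action, []).append(handler)

    def broadcast(self, action: str, payload, source) -> None:
        for handler in self._handlers.get(action, []):
            if handler.__self__ is not source:   # don't echo back to the origin
                handler(payload)

class View:
    """An independently developed visualization component, linked at runtime."""
    def __init__(self, name: str, coord: Coordinator) -> None:
        self.name, self.coord = name, coord
        coord.register("select", self.on_select)

    def select(self, items) -> None:             # user action on this view
        self.coord.broadcast("select", items, source=self)

    def on_select(self, items) -> None:          # reaction in linked views
        print(f"{self.name}: highlighting {items}")

coord = Coordinator()
scatter, parallel = View("scatterplot", coord), View("parallel-coords", coord)
scatter.select([3, 7, 9])   # propagates to the parallel-coordinates view only
```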
Abstract:
Until mid-2006, SCIAMACHY data processors for the operational retrieval of nitrogen dioxide (NO2) column data were based on the historical version 2 of the GOME Data Processor (GDP). In addition to known problems inherent in GDP 2, ground-based validations of SCIAMACHY NO2 data revealed issues specific to SCIAMACHY, such as a large cloud-dependent offset at northern latitudes. In 2006, the GDOAS prototype algorithm of the improved GDP version 4 was transferred to the off-line SCIAMACHY Ground Processor (SGP) version 3.0. In parallel, the calibration of SCIAMACHY radiometric data was upgraded. Before the operational switch-on of SGP 3.0 and the public release of upgraded SCIAMACHY NO2 data, we investigated the accuracy of the algorithm transfer (a) by checking the consistency of SGP 3.0 with prototype algorithms and (b) by comparing SGP 3.0 NO2 data with ground-based observations reported by the WMO/GAW NDACC network of UV-visible DOAS/SAOZ spectrometers. This delta-validation study concludes that SGP 3.0 is a significant improvement over the previous processor, IPF 5.04. For three particular SCIAMACHY states, the study reveals unexplained features in the slant columns and air mass factors, although their quantitative impact on SGP 3.0 vertical columns is not significant.
Abstract:
An agent-based model for spatial electric load forecasting is presented, using a local-movement approach for the spatiotemporal allocation of new loads in the service zone. The load density of each of the major consumer classes in each sub-zone is used as the current state of the agents. Spatial growth is simulated with a walking agent that starts its path at one of the activity centers of the city and moves toward the city limits along a radial path, depending on the different load levels. A series of update rules is established to simulate the S-shaped growth behavior and the complementarity between classes. The results are presented as future load density maps. Tests on a real system from a mid-size city show a high success rate compared with other techniques. The most important features of this methodology are the small amount of data needed and the simplicity of the algorithm, allowing for future scalability. © 2009 IEEE.
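The walking-agent idea can be sketched in a few lines: an agent repeatedly departs an activity center along a random radial path, depositing load-density growth that decays with distance. Grid size, decay rate, and update rule below are invented for illustration, not the paper's rules.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 21
density = np.zeros((size, size))            # load density per sub-zone
center = np.array([size // 2, size // 2])   # one activity center of the city

for _ in range(200):
    pos = center.astype(float)
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)  # random radial path outward
    while 0 <= pos[0] < size and 0 <= pos[1] < size:
        i, j = int(pos[0]), int(pos[1])
        dist = np.linalg.norm(pos - center)
        density[i, j] += np.exp(-dist / 5.0)  # deposit decays with distance
        pos += direction                      # walk toward the city limits

print("peak density:", density.max(), "edge density:", density[0, 0])
```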
Abstract:
This paper proposes a cluster partitioning technique to calculate improved upper bounds on the optimal solution of maximal covering location problems. Given a covering distance, a graph is built whose vertices are the potential facility locations, with an edge connecting each pair of facilities that serve the same client. Coupling constraints, corresponding to some edges of this graph, are identified and relaxed in the Lagrangean way, resulting in disconnected subgraphs representing smaller subproblems that are computationally easier to solve by exact methods. The proposed technique is compared to the classical approach using real data and instances from the available literature. © 2010 Edson Luiz França Senne et al.
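The partitioning step can be sketched directly: build the facility graph, drop the edges whose constraints are dualized, and read off the connected components as independent subproblems. The tiny instance and the choice of relaxed edge below are illustrative assumptions.

```python
from collections import defaultdict

# client -> facilities able to serve it (toy instance)
covers = {"c1": {1, 2}, "c2": {2, 3}, "c3": {4, 5}, "c4": {3, 4}}

# Edge between every pair of facilities serving a common client.
edges = set()
for facs in covers.values():
    facs = sorted(facs)
    edges.update((a, b) for i, a in enumerate(facs) for b in facs[i + 1:])

relaxed = {(3, 4)}               # coupling constraints dualized (Lagrangean)
adj = defaultdict(set)
for a, b in edges - relaxed:
    adj[a].add(b); adj[b].add(a)

# Connected components = smaller subproblems solvable exactly in isolation.
seen, components = set(), []
for v in {f for fs in covers.values() for f in fs}:
    if v in seen:
        continue
    comp, stack = set(), [v]
    while stack:
        u = stack.pop()
        if u in comp:
            continue
        comp.add(u)
        stack.extend(adj[u] - comp)
    seen |= comp
    components.append(comp)
print("subproblems:", components)   # e.g. [{1, 2, 3}, {4, 5}]
```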
Abstract:
In this paper a framework based on the decomposition of the first-order optimality conditions is described and applied to solve the Probabilistic Power Flow (PPF) problem in a coordinated but decentralized way in the context of multi-area power systems. The purpose of the decomposition framework is to solve the problem by iteratively solving smaller subproblems associated with each area of the power system. This strategy allows the probabilistic analysis of the variables of interest in a particular area without explicit knowledge of the network data of the other interconnected areas; it is only necessary to exchange border information related to the tie-lines between areas. An efficient method for probabilistic analysis, considering uncertainty in n system loads, is applied. The proposal is to use a particular case of the point estimate method, known as the Two-Point Estimate Method (TPM), rather than the traditional approach based on Monte Carlo simulation. The main feature of the TPM is that it requires solving only 2n power flows to obtain the behavior of any random variable. An iterative coordination algorithm between areas is also presented. This algorithm solves the multi-area PPF problem in a decentralized way, ensures the independent operation of each area, and integrates the decomposition framework and the TPM appropriately. The IEEE RTS-96 system is used to show the operation and effectiveness of the proposed approach, and Monte Carlo simulations are used to validate the results. © 2011 IEEE.
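The 2n-evaluation idea can be sketched with Hong's two-point scheme for symmetric input distributions: each uncertain load is evaluated at two concentration points with all others at their means. The "power flow" below is a stand-in function and the load statistics are invented; this is not the paper's solver.

```python
import numpy as np

def model(loads):
    """Stand-in for a deterministic power flow: returns a toy bus voltage."""
    return 1.05 - 0.01 * np.sum(loads) - 0.001 * loads[0] ** 2

mu    = np.array([1.0, 0.8, 1.2])   # mean of each uncertain load (assumed)
sigma = np.array([0.1, 0.05, 0.2])  # standard deviations (assumed)
n = len(mu)

m1 = m2 = 0.0
for k in range(n):
    for s in (+1.0, -1.0):
        x = mu.copy()
        x[k] += s * np.sqrt(n) * sigma[k]   # concentration point for load k
        y = model(x)                        # one deterministic "power flow"
        m1 += y / (2 * n)                   # weight 1/(2n) per evaluation
        m2 += y**2 / (2 * n)

# Only 2n runs were needed to estimate the output's first two moments.
print(f"E[V] ~ {m1:.4f}, Std[V] ~ {np.sqrt(max(m2 - m1**2, 0.0)):.4f}")
```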
Abstract:
This paper proposes a tabu search approach to solve the Synchronized and Integrated Two-Level Lot Sizing and Scheduling Problem (SITLSP). It is a real-world problem, often found in soft drink companies, where the production process has two integrated levels with decisions concerning raw material storage and soft drink bottling. Lot sizing and scheduling of raw materials in tanks and of products on bottling lines must be determined simultaneously. Real data provided by a soft drink company are used for comparisons with a previous genetic algorithm. Computational results demonstrate that the tabu search outperformed the genetic algorithm in all instances. Copyright 2011 ACM.
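A generic tabu-search skeleton, shown here on a toy knapsack-style objective, illustrates the mechanics (neighborhood move, tabu tenure, aspiration) that such approaches rely on; the SITLSP encoding and neighborhoods are far richer, so none of the data below comes from the paper.

```python
import random

random.seed(1)
values  = [6, 5, 8, 9, 6, 7, 3]
weights = [2, 3, 6, 7, 5, 9, 4]
CAP = 15

def objective(sol):
    w = sum(wi for wi, s in zip(weights, sol) if s)
    v = sum(vi for vi, s in zip(values, sol) if s)
    return v if w <= CAP else v - 10 * (w - CAP)   # penalize infeasibility

sol = [0] * len(values)
best, best_val = sol[:], objective(sol)
tabu = {}                                          # flipped index -> expiry
for it in range(100):
    candidates = []
    for i in range(len(sol)):
        neigh = sol[:]
        neigh[i] ^= 1                              # move: flip one item
        val = objective(neigh)
        if tabu.get(i, 0) > it and val <= best_val:
            continue                               # tabu, and no aspiration
        candidates.append((val, i, neigh))
    val, i, sol = max(candidates)                  # best admissible neighbor
    tabu[i] = it + 3                               # short tabu tenure
    if val > best_val:
        best, best_val = sol[:], val
print("best value:", best_val, "solution:", best)
```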
Abstract:
Detecting misbehavior (such as the transmission of false information) in vehicular ad hoc networks (VANETs) is a very important problem with a wide range of implications, including safety-related and congestion avoidance applications. We discuss several limitations of existing misbehavior detection schemes (MDS) designed for VANETs. Most MDS are concerned with the detection of malicious nodes. In most situations, however, vehicles send wrong information for selfish reasons of their owners, e.g. to gain access to a particular lane. It is therefore more important to detect false information than to identify misbehaving nodes. We introduce the concept of data-centric misbehavior detection and propose algorithms that detect false alert messages and misbehaving nodes by observing their actions after the alert messages are sent. With the data-centric MDS, each node can decide whether received information is correct or false. The decision is based on the consistency of recent messages and new alerts with reported and estimated vehicle positions. No voting or majority decision is needed, making our MDS resilient to Sybil attacks. After misbehavior is detected, we do not revoke all the secret credentials of misbehaving nodes, as is done in most schemes. Instead, we impose fines on misbehaving nodes (administered by the certification authority), discouraging them from acting selfishly. This reduces the computation and communication costs involved in revoking all the secret credentials of misbehaving nodes. © 2011 IEEE.
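The data-centric check, judging an alert by whether the sender's subsequent positions are consistent with the claimed event, can be sketched minimally. The message fields, threshold, and decision rule below are illustrative assumptions, not the paper's protocol.

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    t: float     # timestamp (s)
    pos: float   # 1-D position along the road (m)

def alert_is_credible(beacons, alert_time, stop_tolerance=5.0):
    """An 'accident ahead' alert implies the sender should slow or stop;
    large movement after the alert contradicts the claim."""
    after = [b for b in beacons if b.t >= alert_time]
    if len(after) < 2:
        return True   # not enough evidence yet; decide on later beacons
    moved = abs(after[-1].pos - after[0].pos)
    return moved <= stop_tolerance

honest  = [Beacon(0, 100), Beacon(5, 102), Beacon(10, 102.5)]
selfish = [Beacon(0, 100), Beacon(5, 160), Beacon(10, 230)]  # kept driving
print("honest alert credible:", alert_is_credible(honest, alert_time=0))
print("selfish alert credible:", alert_is_credible(selfish, alert_time=0))
```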