883 results for "Performance evolution due time"
Abstract:
Highly localized positive-energy states of the free Dirac electron are constructed and shown to evolve in a simple way under the action of Dirac's equation. When the initial uncertainty in position is small on the scale of the Compton wavelength, there is an associated uncertainty in the mean energy that is large compared with the rest mass of the electron. However, this does not lead to any breakdown of the one-particle description, associated with the possibility of pair-production, but rather leads to a rapid expansion of the probability density outwards from the point of localization, at speeds close to the speed of light.
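As a brief aside (not part of the abstract), the scale of the energy spread follows from the standard uncertainty relation together with the relativistic dispersion, assuming a localization width small compared with the Compton wavelength:

```latex
\Delta x \ll \lambda_C = \frac{\hbar}{mc}
\;\Longrightarrow\;
\Delta p \gtrsim \frac{\hbar}{\Delta x} \gg mc
\;\Longrightarrow\;
\Delta E \simeq c\,\Delta p \gg mc^2 .
```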
Abstract:
Mechanisms that produce behavior which increases future survival chances provide an adaptive advantage. The flexibility of human behavior is at least partly the result of one such mechanism: our ability to travel mentally in time and entertain potential future scenarios. We can study mental time travel in children using language. Current results suggest that key developments occur between the ages of three and five. However, linguistic performance can be misleading, as language itself is still developing. We therefore advocate the use of methodologies that focus on future-oriented action. Mental time travel required profound changes in humans' motivational system, so that current behavior could be directed at securing not just present but individually anticipated future needs. Such behavior should be distinguishable from behavior based on current drives or on other mechanisms. We propose an experimental paradigm that provides subjects with an opportunity to act now to satisfy a need not currently experienced. This approach may be used to assess mental time travel in nonhuman animals. We conclude by describing a preliminary study employing an adaptation of this paradigm for children.
Abstract:
Niobium pentoxide reacts actively with concentrated NaOH solution under hydrothermal conditions at temperatures as low as 120 °C. The reaction ruptures the corner-sharing of NbO7 decahedra and NbO6 octahedra in the reactant Nb2O5, yielding various niobates; the structure and composition of the niobates depend on the reaction temperature and time. The morphological evolution of the solid products in the reaction at 180 °C is monitored via SEM: the fine Nb2O5 powder first aggregates into irregular bars, and then niobate fibers with an aspect ratio in the hundreds form. The fibers are a microporous molecular sieve with a monoclinic lattice, Na2Nb2O6·2/3H2O. The fibers are a metastable intermediate of this reaction, and they completely convert to the final product, NaNbO3 cubes, when the reaction is prolonged to 1 h. This study demonstrates that, by carefully optimizing the reaction conditions, we can selectively fabricate niobate structures of high purity, including the delicate microporous fibers, through a direct reaction between concentrated NaOH solution and Nb2O5. This synthesis route is simple and suitable for the large-scale production of the fibers. The reaction first yields poorly crystallized niobates consisting of edge-sharing NbO6 octahedra, and then the microporous fibers crystallize and grow by assembling NbO6 octahedra, or clusters of NbO6 octahedra and NaO6 units. Thus, the selection of the fibrous or cubic product is achieved by controlling the reaction kinetics. Finally, niobates with different structures exhibit remarkable differences in light absorption and photoluminescence properties. Therefore, this study is of importance for developing new functional materials by wet-chemistry processes.
Abstract:
We consider a problem of robust performance analysis of linear discrete time-varying systems on a bounded time interval. The system is represented in state-space form and is driven by a random input disturbance with imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm is reduced to solving a set of difference Riccati and Lyapunov equations and an equation of a special form.
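As a hedged illustration of the kind of recursion involved (not the paper's own equations): in the zero-anisotropy limit the a-anisotropic norm is known to reduce, up to a normalization, to the finite-horizon H2 norm, which for a time-varying system can be computed with a single Lyapunov difference recursion. The full anisotropic norm additionally requires the difference Riccati equations and the special-form equation mentioned above, which are not reproduced here.

```python
import numpy as np

def finite_horizon_h2(A, B, C, D):
    """Finite-horizon H2-type norm of a linear discrete time-varying system
    x_{k+1} = A_k x_k + B_k w_k,  y_k = C_k x_k + D_k w_k,
    driven by zero-mean white noise w_k with identity covariance and x_0 = 0.
    A, B, C, D are lists of matrices over k = 0..N-1. This is only the nominal
    limit of the a-anisotropic norm, not the worst-case value."""
    n = A[0].shape[0]
    P = np.zeros((n, n))                                   # state covariance
    cost = 0.0
    for Ak, Bk, Ck, Dk in zip(A, B, C, D):
        cost += np.trace(Ck @ P @ Ck.T + Dk @ Dk.T)        # trace of output covariance
        P = Ak @ P @ Ak.T + Bk @ Bk.T                      # Lyapunov difference equation
    return np.sqrt(cost)
```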
Abstract:
This paper derives the performance union bound of space-time trellis codes in an orthogonal frequency division multiplexing system (STTC-OFDM) over quasi-static frequency-selective fading channels based on the distance spectrum technique. The distance spectrum is the enumeration of the codeword difference measures and their multiplicities, obtained by exhaustively searching through all possible error event paths. The exhaustive search approach can be used for low-memory-order STTCs with small frame sizes. However, with moderate memory order and moderate frame size the computational cost of exhaustive search increases exponentially, and it may become impractical for high-memory-order STTCs. This calls for advanced computational techniques such as Genetic Algorithms (GAs). In this paper, a GA with a sharing function method is used to locate the multiple solutions of the distance spectrum for high-memory-order STTCs. Simulations evaluate the performance union bound and compare the complexity of the non-GA-aided and GA-aided distance spectrum techniques. They show that the union bound gives a close performance measure at high signal-to-noise ratio (SNR). They also show that the GA-based sharing function distance spectrum technique requires much less computational time than the exhaustive search approach, with satisfactory accuracy.
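A minimal, hypothetical sketch of the sharing idea only (not the paper's STTC distance-spectrum search): fitness sharing divides each individual's raw fitness by a niche count, so that a GA can retain several distinct optima, here standing in for several candidate error-event paths, in one population. The distance measure, sharing radius and real-coded encoding are illustrative assumptions.

```python
import numpy as np

def shared_fitness(pop, raw_fitness, sigma_share=0.1, alpha=1.0):
    """Fitness sharing: divide each individual's raw fitness by a niche count so
    that crowded regions of the search space are penalized and the GA can keep
    several distinct optima alive at once.
    pop: (N, d) array of real-coded individuals; raw_fitness: (N,) array."""
    dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)        # pairwise distances
    share = np.where(dist < sigma_share, 1.0 - (dist / sigma_share) ** alpha, 0.0)
    niche_count = share.sum(axis=1)                                          # >= 1 (self term)
    return raw_fitness / niche_count

# Toy usage: two individuals crowding one optimum share fitness, the third keeps its own.
pop = np.array([[0.00], [0.01], [0.90]])
print(shared_fitness(pop, np.array([1.0, 1.0, 1.0])))
```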
Abstract:
Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while keeping a reasonable battery lifetime. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes towards sub-micrometer integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analog domain. Lower cost, lower power consumption, higher yield, and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analog functions have been moved into the digital domain. This means that analog-to-digital converters (ADCs) are becoming the key components in many electronic systems: they are the bridge between the analog and digital worlds, and their efficiency and accuracy therefore often determine the overall performance of the system. Sigma-Delta converters are the key interface block in high-resolution, low-power mixed-signal circuits. Modeling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, they are extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioral models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter must meet in order to achieve the required performance. The goal of this thesis is the behavioral modeling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data show that the proposed behavioral model is precise and accurate.
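A hedged behavioral sketch along these lines (not the model developed in the thesis): a first-order discrete-time Sigma-Delta modulator with a leaky integrator, standing in for finite DC gain, and additive input-referred thermal noise, which is the kind of non-ideality a high-level model can capture far faster than a transistor-level simulation. The leakage factor and noise level are made-up placeholders.

```python
import numpy as np

def first_order_sd(x, leak=0.999, noise_rms=1e-4, seed=0):
    """Behavioral model of a first-order Sigma-Delta modulator.
    x: oversampled input in [-1, 1]; leak < 1 models the finite DC gain of the
    integrator; noise_rms models input-referred thermal (kT/C) noise.
    Returns the 1-bit output stream (+1/-1)."""
    rng = np.random.default_rng(seed)
    integ, prev_bit = 0.0, 0.0
    bits = np.empty_like(x)
    for k, xk in enumerate(x):
        noisy = xk + noise_rms * rng.standard_normal()   # input-referred thermal noise
        integ = leak * integ + (noisy - prev_bit)        # leaky integrator with DAC feedback
        prev_bit = 1.0 if integ >= 0 else -1.0           # 1-bit quantizer
        bits[k] = prev_bit
    return bits

# Toy usage: a low-frequency sine, heavily oversampled.
t = np.arange(4096)
stream = first_order_sd(0.5 * np.sin(2 * np.pi * t / 512))
print(stream[:16])
```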
Abstract:
This research analyzes the effectiveness of public spending, its efficiency, and their determinants in the Health, Education, and Research sectors for 33 OECD countries. The analysis has a twofold objective: on the one hand a cross-country comparison, and on the other a comparison over time, considering the period from 1992 to 2011. Evaluating the effectiveness and efficiency of public spending is a very topical issue, especially in Europe, both because it accounts for nearly 50% of GDP and because the 2008 financial crisis pushed governments to reduce budgets and use them more carefully. The choice to focus the analysis on the Health, Education, and Research and Development sectors stems on the one hand from their nature as client-oriented activities (schools, hospitals, courts) and on the other from the strategic role they play in a country's economic development. The work is organized in three sections: 1. A review of the main methodological tools used in the literature to measure the performance and efficiency of public spending in the three sectors. 2. Evaluation and comparison of the efficiency and performance of public spending, both over time and across countries, through the construction of performance and efficiency indicators of public spending (for the efficiency index I applied the output-oriented bootstrap DEA technique with non-simultaneous output and input indicators, while the evolution of efficiency between the periods 2002-2011 and 1992-2001 was analyzed by computing the Malmquist index). 3. Analysis of the exogenous variables that influence the efficiency of public spending in the Health, Education, and Research and Development sectors through a Tobit regression whose dependent variable is the output-oriented DEA efficiency scores and whose exogenous variables are indicators chosen from those present in the literature: the indicator of the socio-economic conditions of families (constructed and applied by OECD PISA to evaluate the impact of family background on learning performance), the indicator of trust in the country's legislative system, the indicator of protection of property rights, the indicator of corruption-control actions, the indicator of government effectiveness, the indicator of regulatory quality, and GDP per capita. Interesting results emerge from this work: the amount of resources employed does not always correspond to the maximum achievable level of performance. The DEA results show an average bias-corrected efficiency score of 0.712; hence, using the same amount of resources, a potential improvement of about 29% in the output generated could be achieved. Sweden, Japan, Finland, and Germany turn out to be the most efficient countries, closest to the frontier, while Slovakia, Portugal, and Hungary are farther from the frontier, with an inefficiency measure of about 40%. As for the comparison of public-spending efficiency in the three sectors between the periods 1992-2001 and 2002-2011, the Malmquist index shows interesting results: the countries that improved their efficiency level are Eastern countries such as Estonia, Slovakia, and Lithuania, while the Netherlands, Belgium, and the United States worsened their position.
The countries that are efficient in the DEA, such as Finland, Germany, and Sweden, remained substantially stationary, with a Malmquist index close to one. In conclusion, the Tobit results contain important indications for guiding government choices. The analysis shows that trust in the law, the fight against corruption, government effectiveness, the protection of property rights, and the socio-economic conditions of the families of OECD PISA students positively influence the efficiency of public spending in the three sectors investigated. Beyond the spending review, in order to increase efficiency and improve the performance of public spending in the three sectors, it is essential for states to be able to implement reforms capable of guaranteeing the proper functioning of institutions.
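A hedged, simplified sketch to make the DEA step concrete: a plain output-oriented, constant-returns-to-scale (CCR) model solved as one linear program per decision-making unit. The bootstrap bias correction, the non-simultaneous input/output treatment, and the Malmquist comparison used in the thesis are not reproduced, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_oriented(X, Y):
    """Output-oriented CRS DEA (plain CCR model). X: (n_dmu, n_inputs),
    Y: (n_dmu, n_outputs). Returns phi >= 1 for each DMU; the usual
    efficiency score is 1/phi (1 = on the frontier)."""
    n = X.shape[0]
    phi = np.empty(n)
    for o in range(n):
        c = np.zeros(1 + n); c[0] = -1.0                                 # maximize phi
        A_ub, b_ub = [], []
        for i in range(X.shape[1]):                                      # inputs: sum_j lam_j x_ij <= x_io
            A_ub.append(np.concatenate(([0.0], X[:, i]))); b_ub.append(X[o, i])
        for r in range(Y.shape[1]):                                      # outputs: phi y_ro <= sum_j lam_j y_rj
            A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r]))); b_ub.append(0.0)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub))       # phi, lambda >= 0 by default
        phi[o] = res.x[0]
    return phi

# Toy usage: 4 hypothetical DMUs with one input and one output.
X = np.array([[2.0], [4.0], [3.0], [5.0]])
Y = np.array([[2.0], [3.0], [3.0], [2.0]])
print(dea_output_oriented(X, Y))   # phi = 1 on the frontier, > 1 otherwise
```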
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which combines the best characteristics of photons, whose mobility is exploited to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonator frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into/out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
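As a hedged illustration of the simulation methodology only (a single driven qubit rather than the molecular-nanomagnet or spin-photon registers studied here), the Lindblad master equation with relaxation and dephasing can be integrated numerically with QuTiP's mesolve; all rates and the drive strength are made-up placeholders.

```python
import numpy as np
from qutip import basis, sigmax, sigmaz, sigmam, mesolve

# Single qubit driven by a resonant pulse, with pure dephasing and relaxation.
omega_drive = 2 * np.pi * 1.0            # Rabi frequency (arbitrary units)
gamma_phi, gamma_rel = 0.05, 0.02        # dephasing and relaxation rates (placeholders)

H = 0.5 * omega_drive * sigmax()         # rotating-frame drive Hamiltonian
c_ops = [np.sqrt(gamma_phi) * sigmaz(),  # collapse operators entering the Lindblad terms
         np.sqrt(gamma_rel) * sigmam()]

psi0 = basis(2, 0)                       # start in |0>
tlist = np.linspace(0, 10, 201)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmaz()])
print(result.expect[0][:5])              # damped Rabi oscillations of <sigma_z>
```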
Abstract:
The deficiencies of stationary models applied to financial time series are well documented. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We use dynamic switching (modelled by a hidden Markov model) combined with a linear dynamical system in a hybrid switching state space model (SSSM) and discuss the practical details of training such models with the variational EM algorithm of [Ghahramani and Hinton, 1998]. The performance of the SSSM is evaluated on several financial data sets and is shown to improve on a number of existing benchmark methods.
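A hedged generative sketch of the model class (not the variational EM training procedure from the paper): a hidden Markov chain switches the parameters of a linear-Gaussian state space, giving the kind of regime structure the SSSM assumes for financial series. All parameter values are invented for illustration.

```python
import numpy as np

def sample_switching_ssm(T=500, seed=0):
    """Generate data from a toy switching state space model: a two-state hidden
    Markov chain selects which of two linear-Gaussian dynamics drives a scalar
    latent state x_t, observed with noise."""
    rng = np.random.default_rng(seed)
    P = np.array([[0.98, 0.02],                       # regime transition matrix (persistent regimes)
                  [0.03, 0.97]])
    a = [0.99, 0.80]                                  # per-regime AR coefficients of the latent state
    q = [0.05, 0.40]                                  # per-regime process-noise std ("calm" vs "volatile")
    r = 0.10                                          # observation-noise std
    s, x = 0, 0.0
    regimes, obs = np.empty(T, dtype=int), np.empty(T)
    for t in range(T):
        s = rng.choice(2, p=P[s])                     # Markov switch
        x = a[s] * x + q[s] * rng.standard_normal()   # regime-dependent linear dynamics
        obs[t] = x + r * rng.standard_normal()        # noisy observation
        regimes[t] = s
    return regimes, obs

regimes, y = sample_switching_ssm()
print(regimes[:20], y[:5])
```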
Abstract:
Over 60% of the recurrent budget of the Ministry of Health (MoH) in Angola is spent on the operations of fixed health care facilities (health centres plus hospitals). However, to date, no study has attempted to investigate how efficiently those resources are used to produce health services. The objectives of this study were therefore to assess the technical efficiency of public municipal hospitals in Angola; to assess changes in productivity over time, with a view to analyzing changes in efficiency and technology; and to demonstrate how the results can be used in pursuit of the public health objective of promoting efficiency in the use of health resources. The analysis was based on 3-year panel data from all 28 public municipal hospitals in Angola. Data Envelopment Analysis (DEA), a non-parametric linear programming approach, was employed to assess technical and scale efficiency, and productivity change over time was assessed using the Malmquist index. The results show that, on average, the productivity of municipal hospitals in Angola increased by 4.5% over the period 2000-2002, and that this growth was due to improvements in efficiency rather than innovation.
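For reference, the output-oriented Malmquist productivity index between periods t and t+1 is conventionally written in terms of distance functions D_o and factors into an efficiency-change ("catch-up") term and a technical-change ("frontier-shift") term; this decomposition is what underlies the efficiency-versus-innovation statement above.

```latex
M_o(x^{t+1},y^{t+1},x^{t},y^{t})
  = \underbrace{\frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^{t}(x^{t},y^{t})}}_{\text{efficiency change}}
    \times
    \underbrace{\left[
      \frac{D_o^{t}(x^{t+1},y^{t+1})}{D_o^{t+1}(x^{t+1},y^{t+1})}\,
      \frac{D_o^{t}(x^{t},y^{t})}{D_o^{t+1}(x^{t},y^{t})}
    \right]^{1/2}}_{\text{technical change}}
```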
Abstract:
We have used MALDI-MS imaging (MALDI-MSI) to monitor the time-dependent appearance and loss of signals when tissue slices are brought rapidly to room temperature for short to medium periods of time. Sections from mouse brain were cut in a cryostat microtome, placed on a MALDI target and allowed to warm to room temperature for 30 s to 3 h. Sections were then refrozen, fixed by ethanol treatment and analysed by MALDI-MSI. The intensities of a range of markers were seen to vary across the time course, both increasing and decreasing; the intensities of some markers changed significantly within 30 s, and markers also showed tissue-location-specific evolution. The markers resulting from this autolysis were compared directly to those that evolved in a comparable 16 h on-tissue trypsin digest, and the markers that evolved in the two studies were seen to be substantially different. These changes offer an important additional level of location-dependent information for mapping changes and seeking disease-dependent biomarkers in tissue. They also indicate that considerable care is required when comparing biomarkers between MALDI-MSI experiments, and they have implications for the standard practice of thaw-mounting multiple tissue sections onto MALDI-MS targets.