Abstract:
This thesis studies the evolution of nonlinear density perturbations, comparing models with a cosmological constant (wΛ = −1) against alternative models in which dark energy is characterized by an equation of state with w different from −1, considering both the case of constant dark energy and the case in which it fluctuates. The cosmological constant indeed presents two theoretical problems that are currently unsolved: the problem of its value and the coincidence problem. To analyze the evolution of matter and dark-energy perturbations, for both primordial overdensities and underdensities, the nonlinear differential equations derived from spherical collapse theory are implemented numerically. The problem is parametrized through the critical values of the density contrast, δc and δv, which represent, respectively, the linear threshold for gravitational collapse and the threshold for identifying a cosmic void. The values of δc and δv are important because they are linked to cosmic observables through the mass function and the void size function: since these critical thresholds enter both functions, changing δc and δv with the assumed cosmological model directly affects the number and type of cosmic objects formed, as estimated with the mass function and the void size function. The main goal is therefore to understand how much the assumption of one model rather than another affects the values of δc and δv. In this way it is possible to estimate, using the mass function and the void size function, the effects on cosmic structure formation due to variations of the critical thresholds δc and δv as a function of the chosen cosmological model. The results are compared with the standard cosmological model (ΛCDM), for which Ω0,m = 0.3, Ω0,Λ = 0.7 and wΛ = −1 are assumed.
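The numerical implementation described above can be illustrated with a minimal sketch. The code below integrates the standard spherical-collapse equation for the matter density contrast δ in a flat ΛCDM background with Ω0,m = 0.3 and Ω0,Λ = 0.7; the thesis's alternative dark-energy parametrizations and the fluctuating dark-energy case are not reproduced, and the chosen initial conditions are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

OM0, OL0 = 0.3, 0.7  # fiducial ΛCDM values quoted in the abstract

def E2(a):
    """Dimensionless Hubble rate squared, H^2/H0^2, for flat ΛCDM."""
    return OM0 * a**-3 + OL0

def rhs(a, y):
    """Standard nonlinear spherical-collapse ODE, derivatives w.r.t. a."""
    d, dp = y
    dlnE_da = -1.5 * OM0 * a**-4 / E2(a)
    om_a = OM0 * a**-3 / E2(a)          # Omega_m(a)
    dpp = (-(3.0 / a + dlnE_da) * dp
           + (4.0 / 3.0) * dp**2 / (1.0 + d)
           + 1.5 * om_a * d * (1.0 + d) / a**2)
    return [dp, dpp]

# Growing-mode initial condition deep in matter domination: delta ∝ a.
a_i, d_i = 1e-3, 1e-3
sol = solve_ivp(rhs, (a_i, 1.0), [d_i, d_i / a_i], rtol=1e-8)
delta_today = sol.y[0, -1]  # nonlinear contrast at a = 1
```

Raising the initial amplitude d_i until δ diverges before a = 1, and extrapolating the linear solution to that epoch, is how the critical threshold δc is obtained in practice.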
Abstract:
Greater awareness of the importance of resource exploitation has contributed to favoring a transition from a linear economic model to a circular one, based on closing resource cycles and extending the life cycle of a product or service. Industrial symbiosis fits into this context as a strategy for resource optimization: it involves the transfer of waste and by-products from an output company to an input company, which will use them as resources for its own processes. Establishing industrial symbiosis relationships is not immediate, however, and requires an external body that facilitates collaboration between companies and provides technical support for implementation. In Italy, ENEA is the main reference body for the implementation of industrial symbiosis projects and has developed an ecosystem of tools to guide companies along industrial symbiosis paths. The objective of this work is to formulate a methodology for the preliminary assessment of potential synergies at the territorial level, using historical data on synergies, and to present an application tool based on it. The first part of the work is devoted to a literature review on industrial symbiosis, highlighting its advantages and the role of ENEA. The methodology developed is then illustrated with its operational steps; its aim is to assess the presence of potential synergies at the territorial level, starting from a database of companies classified by Ateco codes and a database of previously activated synergies. Finally, the Microsoft Excel tool, programmed in VBA, that implements the methodology is presented. To demonstrate its operation, company data extracted from the AIDA database and synergy data from the MAESTI and IS-DATA databases are used.
Abstract:
The television landscape of the last twenty years has seen dramatic changes. Business models have changed, as have the logics of production, distribution and consumption. We now live in a hybrid world, where broadcast and broadband merge, contested by all the players in the field: content providers, MVPDs, free and pay broadcasters, telephone companies, OTT platforms and content aggregators. A closer look, however, reveals a constellation of other significant players in this landscape. The 1970s saw the impetuous flourishing of many local broadcasters that made the history of television in Italy. The idea for this thesis stems precisely from the role these broadcasters play today, given their territorial, cultural and social relevance. In particular, the case of RTV38 in Tuscany emblematically represents this important role, balanced on one side by a public service rendered to the community, making it a true institution, and on the other by a focus on strictly commercial and private activity.
Abstract:
Attention deficit, impulsivity and hyperactivity are the cardinal features of attention deficit hyperactivity disorder (ADHD), but executive function (EF) deficits, such as problems with inhibitory control, working memory and reaction time, among other EFs, may underlie many of the disturbances associated with the disorder. OBJECTIVE: To examine reaction time on a computerized test in children with ADHD and in normal controls. METHOD: Twenty-three boys (aged 9 to 12) with an ADHD diagnosis according to the clinical criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, 2000 (DSM-IV), without comorbidities, with Intelligence Quotient (IQ) > 89, and never treated with stimulants, together with fifteen age-matched normal controls, were investigated during performance on a voluntary attention psychophysical test. RESULTS: Children with ADHD showed longer reaction times than normal controls. CONCLUSION: A slower reaction time occurred in our patients with ADHD. This finding may be related to problems with the attentional system, which could not maintain an adequate capacity in perceptual input processes and/or motor output processes to respond consistently during continuous or repetitive activity.
Abstract:
PURPOSE: The main goal of this study was to develop and compare two different techniques for the classification of specific types of corneal shape when Zernike coefficients are used as inputs. A feed-forward artificial neural network (NN) and discriminant analysis (DA) were used. METHODS: The inputs for both the NN and the DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK), installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons, which were associated with 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape, and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of precision ([true positive + true negative]/total number of cases). Mean overall results across all cases for the NN and DA techniques were, respectively, 94% and 84.8%. CONCLUSION: Although we used a relatively small database, the results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be a reliable parameter as input data for the diagnostic automation of VK maps, using either NN or DA.
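The architecture described above maps 15 Zernike coefficients to 5 class outputs. A minimal sketch of such a feed-forward pass follows; the hidden-layer size and the random (untrained) weights are assumptions, since the study's trained network and data are not available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes from the abstract: 15 Zernike coefficients in, 5 corneal-shape
# classes out. The hidden width of 10 is an assumption for illustration.
n_in, n_hidden, n_out = 15, 10, 5

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(z):
    """Map a vector of 15 Zernike coefficients to 5 class probabilities."""
    h = np.tanh(W1 @ z + b1)            # hidden layer
    logits = W2 @ h + b2                # output layer
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

probs = forward(rng.normal(size=n_in))  # one hypothetical input vector
```

In the actual study the weights would be fitted on the 80 classified elevation files; here the forward pass simply illustrates the 15-input, 5-output shape of the classifier.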
Abstract:
Objective: To evaluate the adhesion of the endodontic sealers Epiphany, Apexit Plus, and AH Plus to root canal dentin submitted to different surface treatments, by using the push-out test. Methods: One hundred twenty-eight root cylinders obtained from maxillary canines were embedded in acrylic resin, had the canals prepared, and were randomly assigned to four groups (n = 32), according to root dentin treatment: (I) distilled water (control), (II) 17% EDTAC, (III) 1% NaOCl and (IV) Er:YAG laser with 16-Hz, 400-mJ input (240-mJ output) and 0.32-J/cm² energy density. Each group was divided into four subgroups (n = 8) filled with Epiphany (either dispensed from the automix syringe supplied by the manufacturer or prepared by hand mixing), Apexit Plus, or AH Plus. Data (MPa) were analyzed by ANOVA and Tukey's test. Results: A statistically significant difference (p < 0.01) was found among the root-canal sealers, except for the Epiphany subgroups, which had statistically similar results to each other (p > 0.01): AH Plus (4.77 ± 0.85), Epiphany/hand mixed (3.06 ± 1.34), Epiphany/automix syringe (2.68 ± 1.35), and Apexit Plus (1.22 ± 0.33). A significant difference (p < 0.01) was found among the dentin surface treatments. The highest adhesion values were obtained with AH Plus when root dentin was treated with Er:YAG laser and 17% EDTAC. Epiphany sealer presented the lowest adhesion values to root dentin treated with 17% EDTAC. Conclusions: The resin-based sealers had different adhesive behaviors, depending on the treatment of the root canal walls. The mode of preparation of Epiphany (automix syringe or hand mixing) did not influence sealer adhesion to root dentin.
Abstract:
In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena - for instance, the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION - can be accounted for by the employment of a rule-based grammar. Another approach to NLP concerns the use of the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, a symbolic-connectionist hybrid system called BIOθPRED (BIOlogically plausible thematic (θ) symbolic-connectionist PREDictor) is proposed, designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture takes as input a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and produces as output the thematic grid assigned to the sentence. BIOθPRED is designed to "predict" the thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture, and adopting a psycholinguistic view of thematic theory.
Abstract:
Biofuels are both a promising solution to global warming mitigation and a potential contributor to the problem. Several life cycle assessments of bioethanol have been conducted to address these questions. We performed a synthesis of the available data on Brazilian ethanol production focusing on greenhouse gas (GHG) emissions and carbon (C) sinks in the agricultural and industrial phases. Emissions of carbon dioxide (CO₂) from fossil fuels, methane (CH₄) and nitrous oxide (N₂O) from sources commonly included in C footprints, such as fossil fuel usage, biomass burning, nitrogen fertilizer application, liming and litter decomposition, were accounted for. In addition, black carbon (BC) emissions from burning biomass and soil C sequestration were included in the balance. Most of the annual emissions per hectare are in the agricultural phase, both in the burned system (2209 out of a total of 2398 kg C-eq) and in the unburned system (559 out of 748 kg C-eq). Although nitrogen fertilizer emissions are large, 111 kg C-eq ha⁻¹ yr⁻¹, the largest single source of emissions is biomass burning in the manual harvest system, with a large amount of both GHG (196 kg C-eq ha⁻¹ yr⁻¹) and BC (1536 kg C-eq ha⁻¹ yr⁻¹). Besides avoiding emissions from biomass burning, harvesting sugarcane mechanically without burning tends to increase soil C stocks, providing a C sink of 1500 kg C ha⁻¹ yr⁻¹ in the 30 cm layer. The data show a C output:input ratio of 1.4 for ethanol produced under the conventionally burned, manual harvest compared with 6.5 for the mechanized harvest without burning, signifying the importance of conservation agricultural systems in bioethanol feedstock production.
Abstract:
Samogin Lopes, FA, Menegon, EM, Franchini, E, Tricoli, V, and de M. Bertuzzi, RC. Is acute static stretching able to reduce the time to exhaustion at power output corresponding to maximal oxygen uptake? J Strength Cond Res 24(6): 1650-1656, 2010. This study analyzed the effect of an acute static stretching bout on the time to exhaustion (Tlim) at the power output corresponding to V̇O₂max. Eleven physically active male subjects (age 22.3 ± 2.8 years, V̇O₂max 2.7 ± 0.5 L·min⁻¹) completed an incremental cycle ergometer test, 2 muscle strength tests, and 2 maximal tests to exhaustion at the power output corresponding to V̇O₂max, with and without a previous static stretching bout. The Tlim was not significantly affected by the static stretching (164 ± 28 vs. 150 ± 26 seconds with and without stretching, respectively, p = 0.09), but the time to reach V̇O₂max (118 ± 22 vs. 102 ± 25 seconds), blood-lactate accumulation immediately after exercise (10.7 ± 2.9 vs. 8.0 ± 1.7 mmol·L⁻¹), and oxygen deficit (2.4 ± 0.9 vs. 2.1 ± 0.7 L) were significantly reduced (p ≤ 0.02). Thus, an acute static stretching bout did not reduce Tlim at the power output corresponding to V̇O₂max, possibly by accelerating aerobic metabolism activation at the beginning of exercise. These results suggest that coaches and practitioners involved with aerobic-dependent activities may use static stretching as part of their warm-up routines without fear of diminishing high-intensity aerobic exercise performance.
Abstract:
This work proposes a completely new approach for the design of resonant structures aimed at wavelength-filtering applications. The structure consists of a subwavelength metal-insulator-metal (MIM) waveguide presenting tilted coupled structures transversely arranged at the midpoint between the input and output ports. The cavity-like response of this device has shown that this concept can be particularly attractive for optical filter design for telecom applications. The extra degree of freedom provided by the tilting of the cavity has proved not only to be very effective in improving the quality factor of these structures, but also to be an elegant way of extending the range of applications for tuning multiple wavelengths, if necessary.
Abstract:
This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics. This characteristic affects machine stability and performance. The design methodology consists of four main steps: (i) system modeling using the flexible multibody technique, (ii) the synthesis of reduced-order models suitable for control design, (iii) systematic flexible-model-based input signal design, and (iv) the evaluation of some possible machine designs. The novelty of this methodology is that structural flexibilities are taken into consideration during the input signal design, thereby enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is exploited for the design evaluation of a two-degree-of-freedom high-speed parallel manipulator. The results are experimentally validated. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, the nonlinear dynamic equations of a wheeled mobile robot are described in state-space form, where the parameters are part of the state (the angular velocities of the wheels). This representation, known as quasi-linear parameter varying, is useful for control designs based on nonlinear H∞ approaches. Two nonlinear H∞ controllers that guarantee an induced L₂-norm, between input (disturbance) and output signals, bounded by an attenuation level γ are used to control a wheeled mobile robot. These controllers are solved via linear matrix inequalities and the algebraic Riccati equation. Experimental results are presented, with a comparative study among these robust control strategies and the standard computed-torque, plus proportional-derivative, controller.
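The abstract mentions controllers obtained via the algebraic Riccati equation. As a simple linear stand-in for that ingredient (the toy system matrices below are assumptions, not the robot model, and the full nonlinear H∞ synthesis is not reproduced), a stabilizing state-feedback gain can be computed from a continuous-time Riccati solve:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state, 1-input linear system (illustrative only).
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # input weighting

# Solve the continuous-time algebraic Riccati equation for P.
P = solve_continuous_are(A, B, Q, R)

# State-feedback gain u = -K x; the closed loop A - B K is stable.
K = np.linalg.solve(R, B.T @ P)
Acl = A - B @ K
```

The H∞ case replaces this quadratic-cost Riccati equation with a game-type Riccati equation parametrized by the attenuation level γ, but the solve-and-feed-back structure is analogous.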
Abstract:
An efficient expert system for power transformer condition assessment is presented in this paper. Through the application of Duval's triangle and the gas-ratio method, a first assessment of the transformer condition is obtained in the form of a dissolved gas analysis (DGA) diagnosis according to IEC 60599. As a second step, a knowledge-mining procedure is performed by conducting surveys whose results are fed into a first Type-2 Fuzzy Logic System (T2-FLS), in order to initially evaluate the condition of the equipment taking only the results of the dissolved gas analysis into account. The output of this first T2-FLS is used as the input of a second T2-FLS, which additionally weighs up the condition of the paper-oil system. The output of this last T2-FLS is given in terms of words easily understandable by maintenance personnel. The proposed assessment methodology has been validated on several cases of transformers in service. (C) 2010 Elsevier Ltd. All rights reserved.
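The gas-ratio method referenced above builds on a few standard ratios of dissolved gas concentrations. A minimal sketch computing the three basic ratios used in IEC 60599 interpretation follows (the sample concentrations are hypothetical, and the diagnostic rule tables that map ratios to fault types are not reproduced):

```python
def iec_gas_ratios(h2, ch4, c2h2, c2h4, c2h6):
    """Compute the three basic gas ratios used in IEC 60599 DGA
    interpretation from dissolved gas concentrations (e.g. in ppm)."""
    return {
        "C2H2/C2H4": c2h2 / c2h4,   # acetylene / ethylene
        "CH4/H2": ch4 / h2,         # methane / hydrogen
        "C2H4/C2H6": c2h4 / c2h6,   # ethylene / ethane
    }

# Hypothetical oil sample concentrations, for illustration only.
ratios = iec_gas_ratios(h2=100.0, ch4=200.0, c2h2=5.0, c2h4=50.0, c2h6=25.0)
```

In the expert system these ratio values (together with a Duval triangle classification) would feed the first T2-FLS as crisp inputs.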
Abstract:
In this work, a stable MPC that maximizes the domain of attraction of the closed-loop system is proposed. The proposed approach is suitable for real applications in the sense that it accounts for output tracking, is offset-free if the output target is reachable, and minimizes the offset if some of the constraints are active at steady state. The new approach is based on the definition of a Minkowski functional related to the input and terminal constraints of the stable infinite-horizon MPC. It is also shown that the domain of attraction is defined by the system model and the constraints, and does not depend on the controller tuning parameters. The proposed controller is illustrated with small-order examples from the control literature. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper deals with the problem of tracking target sets using a model predictive control (MPC) law. Some MPC applications require a control strategy in which some system outputs are controlled within specified ranges or zones (zone control), while some other variables - possibly including input variables - are steered to a fixed target or set-point. In real applications, this problem is often handled by including and excluding an appropriate penalization of the output errors in the control cost function. In this way, throughout the continuous operation of the process, the control system keeps switching from one controller to another, and even if a stabilizing control law is developed for each of the control configurations, switching among stable controllers does not necessarily produce a stable closed-loop system. From a theoretical point of view, the control objective of this kind of problem can be seen as a target set (in the output space) instead of a target point, since inside the zones there is no preference between one point and another. In this work, a stable MPC formulation for constrained linear systems, with several practical properties, is developed for this scenario. The concept of the distance from a point to a set is exploited to propose an additional cost term, which ensures both recursive feasibility and local optimality. The performance of the proposed strategy is illustrated by simulation of an ill-conditioned distillation column. (C) 2010 Elsevier Ltd. All rights reserved.
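The distance-to-set idea behind zone control can be made concrete for a scalar output: the cost ingredient is the distance from the predicted output to the target zone, which vanishes everywhere inside the zone. The sketch below shows only that ingredient (the full constrained MPC formulation of the paper is not reproduced):

```python
import numpy as np

def zone_distance(y, y_min, y_max):
    """Distance from a scalar output y to the target zone [y_min, y_max].

    Zero whenever y lies inside the zone, so the cost term expresses no
    preference between points inside it; grows linearly outside."""
    return np.maximum(y_min - y, 0.0) + np.maximum(y - y_max, 0.0)
```

In a zone-control MPC cost, a term such as `zone_distance(y_pred, y_min, y_max)**2` penalizes only zone violations, replacing the set-point error term of a conventional tracking formulation.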