887 results for constraint-led
Abstract:
In this work, an LED (light-emitting diode) based photometer for solid-phase photometry is proposed. The photometer was designed so that the radiation source (LED) and the photodetector are coupled directly to the flow cell, giving an optical path of 4 mm. The flow cell was packed with a solid material (C18), which was used to immobilize the chromogenic reagent 1-(2-thiazolylazo)-2-naphthol (TAN). Accuracy was assessed against data obtained by ICP OES (inductively coupled plasma optical emission spectrometry); applying the paired t-test, no significant difference was observed at the 95% confidence level. Other important figures of merit were a linear response range of 0.05 to 0.85 mg L-1 Zn, a detection limit of 9 µg L-1 Zn (n = 3), a standard deviation of 1.4% (n = 10), a sampling rate of 36 determinations per hour, and an effluent generation and reagent consumption of 1.7 mL and 0.03 µg per determination, respectively.
Abstract:
Conservatism is a central theme of organismic evolution. Related species share characteristics due to their common ancestry. Concerns have been raised among evolutionary biologists as to whether such conservatism is an expression of natural selection or of a constrained ability to adapt. This thesis explores adaptations and constraints within the plant reproductive phase, particularly in relation to the evolution of fleshy fruit types (berries, drupes, etc.) and the seasonal timing of flowering and fruiting. The different studies were arranged along a hierarchy of scale, from general data sets sampled among seed plants at the global scale, through more specific analyses of character evolution within the genus Rhamnus s.l. L. (Rhamnaceae), to descriptive and experimental field studies in a local population of Frangula alnus (Rhamnaceae). Apart from the field study, this thesis is mainly based on comparative methods that explicitly incorporate phylogenetic relationships. The comparative study of Rhamnus s.l. species included the reconstruction of phylogenetic hypotheses based on DNA sequences. Among geographically overlapping sister clades, biotic pollination was not correlated with higher species richness when compared to wind-pollinated plants. Among woody plants, clades characterized by fleshy fruit types were more species-rich than their dry-fruited sister clades, suggesting that the fleshy fruit is a key innovation in woody habitats. Moreover, the evolution of fleshy fruits was correlated with a change to more closed (darker) habitats. An independent-contrast study within Rhamnus s.l. documented allometric relations between plant and fruit size. As a phylogenetic constraint, however, allometric effects must be considered weak or non-existent, as they did not prevail among different subclades within Rhamnus s.l. Fruit size was correlated with seed size and seed number in F. alnus. This thesis suggests that frugivore selection on fleshy fruits may be important in constraining the upper limits of fruit size when a plant lineage is colonizing (darker) habitats where larger seed size is adaptive. Phenological correlations with fruit set, dispersal, and seed size in F. alnus suggested that the evolution of reproductive phenology is constrained by trade-offs and partial interdependences between the flowering, fruiting, dispersal, and recruitment phases. Phylogenetic constraints on the evolution of phenology were indicated by a lack of correlation between flowering time and seasonal length within Rhamnus cathartica and F. alnus, respectively. On the other hand, flowering time was correlated with seasonal length among Rhamnus s.l. species. Phenological differences between biotically and wind-pollinated angiosperms also suggested adaptive change in reproductive phenology.
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multisets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, as it is Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR that allows us to track the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses a history to avoid trivial non-termination. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program, in particular to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems: it consists of replacing a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, confluence and termination maintenance between the original and transformed programs are shown. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Section 1.2 then introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies to solve the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular annotated syntax, which is introduced in Section 3.1, and its related modified operational semantics is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, Section 3.4 discusses the problems related to the replacement of a rule by its unfolded version, which in turn gives a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
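As a rough illustration of the rewriting behaviour described above (and not of the thesis's formal semantics), the following Python sketch applies CHR-style simplification rules to a multiset constraint store until exhaustion; the rules, constraints, and string-based matching are invented for the example, whereas real CHR matches rule heads by unification and guards.

from collections import Counter

# Constraint store = multiset of constraints; here constraints are ground strings,
# so rule heads match literally (real CHR matches heads via unification and guards).
rules = [
    # name,           head (removed from the store),  body (added to the store)
    ("reflexivity",   ["leq(a,a)"],                   []),
    ("antisymmetry",  ["leq(a,b)", "leq(b,a)"],       ["eq(a,b)"]),
]

def apply_rules(store):
    """Apply simplification rules until no rule fires, in the spirit of CHR's operational reading."""
    store = Counter(store)
    fired = True
    while fired:
        fired = False
        for _name, head, body in rules:
            need = Counter(head)
            if all(store[c] >= n for c, n in need.items()):
                store -= need            # remove the matched head constraints
                store += Counter(body)   # add the rule body
                fired = True
    return sorted(store.elements())

print(apply_rules(["leq(a,b)", "leq(b,a)", "leq(a,a)"]))   # -> ['eq(a,b)']

Unfolding, in this informal reading, would replace a constraint occurring in a rule body by the body of a rule defining it; the sketch above only shows the basic rewriting step.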
Abstract:
The thesis work presented here investigates the application of learning techniques aimed at a more efficient execution of a portfolio of constraint solvers. A constraint solver is a program that, given a constraint problem as input, computes a solution using a variety of techniques. Constraint problems are highly common in real life: examples such as organizing train journeys or scheduling the crews of an airline are all constraint problems. A constraint problem is formalized as a constraint satisfaction problem (CSP). A CSP is described by a set of variables that can take values from a specific domain and a set of constraints relating the variables and the values they can take. One technique for optimizing the resolution of such problems is the portfolio approach. This technique, also used in fields such as economics, combines several solvers which together can produce better results than a single-solver approach. In this work we develop a new technique that combines a portfolio of constraint solvers with machine learning techniques. Machine learning is a field of artificial intelligence whose goal is to give machines a kind of 'intelligence'. A practical example is evaluating past cases of a problem and using them in the future to make choices, a process also found in human cognition. Specifically, we reason in terms of classification. A classification assigns to a set of input features a discrete output value, such as true or false when an e-mail is classified as spam or not. The learning phase is carried out using part of CPHydra, a portfolio of constraint solvers developed at University College Cork (UCC). Of this portfolio algorithm, only the characteristics used to describe certain aspects of one CSP with respect to another are used; these characteristics are also called features. We then build a series of classifiers based on the specific behaviour of the solvers. The combination of these classifiers with the portfolio approach is aimed at assessing whether the CPHydra features are good and whether classifiers based on these features are reliable. To justify the first result, we carry out a comparison with one of the best state-of-the-art portfolios, SATzilla. Once the quality of the features used for classification has been established, we solve the problems by simulating a scheduler. These simulations test different rules built from the previously introduced classifiers. We first consider a single-processor scenario and then extend it to a multi-processor scenario. In these experiments we verify that the performance obtained by applying the rules built on the classifiers is better than an execution limited to using the single best solver of the portfolio. The thesis work was carried out in collaboration with the 4C research centre at University College Cork. A scientific paper based on this work was written and submitted to the International Joint Conference on Artificial Intelligence (IJCAI) 2011. At the time of submitting the thesis we had not yet been informed whether the paper was accepted; however, the reviewers' responses indicated that the presented method is interesting.
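To make the classification idea concrete, the following Python sketch trains one classifier per solver on CSP feature vectors and then picks the solver whose classifier is most confident of success. It is only a toy example using scikit-learn; the feature values, solver names, and labels are invented and do not reproduce CPHydra's actual features or algorithm.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: each row is a vector of CSP features (e.g. number of variables,
# number of constraints, mean domain size); all names and values are invented.
X_train = np.array([[120, 300, 8], [40, 90, 3], [500, 1200, 15], [60, 150, 4]])
# One binary label per solver: did that solver finish within the timeout?
labels = {
    "mistral": np.array([1, 1, 0, 1]),
    "choco":   np.array([0, 1, 0, 1]),
}

# One classifier per solver, mirroring the per-solver classification described above.
classifiers = {
    name: RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y)
    for name, y in labels.items()
}

def pick_solver(features):
    """Return the solver whose classifier is most confident it will succeed."""
    scores = {
        name: clf.predict_proba([features])[0][list(clf.classes_).index(1)]
        for name, clf in classifiers.items()
    }
    return max(scores, key=scores.get)

print(pick_solver([100, 250, 6]))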
Abstract:
The work presented in this thesis lies in the context of constraint programming, a paradigm for modelling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large class of these problems finds a natural formulation in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent research has therefore focused on finding efficient ways to represent such variables. The usual approach is to represent these domains by means of interval-based approximations (hereafter, representations), specified by a lower bound and an upper bound according to an appropriate ordering relation. The recent evolution of research on set constraint programming has clearly shown that combining different representations achieves performance orders of magnitude better than traditional encoding techniques, and numerous proposals have been made in this direction. These works differ in how consistency between the different representations is maintained and in how constraints are propagated in order to reduce the search space. Unfortunately, no formal tool exists for comparing these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of combination of representations, bringing out the common aspects that have characterized previous work. In particular, we identify two possible types of combination, a strong one and a weak one, defining the notions of bound consistency on constraints and of synchronization between representations. Our study offers some interesting insights into existing combinations, highlighting their limits and revealing some surprises. We also provide a complexity analysis of the synchronization between minlex, a representation able to optimally propagate lexicographic constraints, and the main existing representations.
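As a concrete, simplified example of an interval representation, the following Python sketch shows the classic subset-bounds approximation, in which a set variable's domain is described by a required lower-bound set and an allowed upper-bound set, together with bounds propagation for a subset constraint. The class, constraint, and values are illustrative only and do not reproduce the formal framework or the minlex representation discussed in the thesis.

class SetBounds:
    """Interval (subset-bounds) approximation of a set variable's domain:
    every admissible value must include `lb` and be included in `ub`."""
    def __init__(self, lb, ub):
        self.lb, self.ub = set(lb), set(ub)
        assert self.lb <= self.ub, "empty domain"

    def __repr__(self):
        return f"[{sorted(self.lb)} .. {sorted(self.ub)}]"

def propagate_subset(s, t):
    """Bounds propagation for the constraint S subseteq T:
    S loses values not allowed in T, T gains values required by S."""
    s.ub &= t.ub          # S may only contain what T may contain
    t.lb |= s.lb          # T must contain whatever S must contain
    if not (s.lb <= s.ub and t.lb <= t.ub):
        raise ValueError("inconsistent constraint")

S = SetBounds(lb={1}, ub={1, 2, 3, 4})
T = SetBounds(lb={2}, ub={1, 2, 3})
propagate_subset(S, T)
print(S, T)   # -> [[1] .. [1, 2, 3]] [[1, 2] .. [1, 2, 3]]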
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is increasingly urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows one to easily model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS) that iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and that encloses concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on problems of practical size, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
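The following Python sketch illustrates the neighborhood-restriction idea behind local branching: search is limited to assignments within a given Hamming distance of the incumbent solution. Brute-force enumeration stands in for the CP tree search, and the toy 0/1 problem, weights, and incumbent are invented for the example.

from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def local_branching_search(objective, feasible, incumbent, k, n):
    """Explore only assignments within Hamming distance k of the incumbent,
    mimicking the local branching constraint added to the model.
    (Brute-force enumeration stands in for the CP tree search.)"""
    best, best_val = incumbent, objective(incumbent)
    for x in product([0, 1], repeat=n):
        if hamming(x, incumbent) <= k and feasible(x):
            v = objective(x)
            if v < best_val:
                best, best_val = x, v
    return best, best_val

# Toy 0/1 problem: minimize a weighted sum subject to "at least two ones".
weights = [4, 1, 3, 2, 5]
objective = lambda x: sum(w * xi for w, xi in zip(weights, x))
feasible = lambda x: sum(x) >= 2
incumbent = (1, 1, 0, 1, 0)

print(local_branching_search(objective, feasible, incumbent, k=2, n=5))
# -> ((0, 1, 0, 1, 0), 3): a better solution found inside the k = 2 neighborhood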
Abstract:
This thesis describes the operation of quantum-confined LED (Light Emitting Diode) light sources, which represent the new frontier of high-efficiency, long-lifetime lighting. The introductory chapters briefly describe the history of LEDs from their invention to the most recent developments. The operation of these photonic devices is explained starting from the concept of a light source based on electroluminescence, with particular reference to heterostructures with two-dimensional quantum confinement (quantum wells). The central chapters concern the III-V nitrides, whose characteristics and properties have made it possible to fabricate LEDs with high efficiency and a wide emission spectrum, especially in relation to the fact that III-V nitride LEDs emit light even in the presence of high densities of extended defects, specifically dislocations. The following chapters present the experimental work carried out, which concerns the electrical, optical and structural characterization of quantum-confined LEDs based on the III-V nitrides GaN and InGaN, grown in the Cambridge laboratories of the Center for Gallium Nitride. The final objective of the study is the comparison of the results obtained on LEDs with the same epitaxial structure but different dislocation densities, in order to better understand the role that such extended defects play in determining the efficiency of LED light sources. The last chapter deals with X-ray diffraction from a theoretical point of view, with particular attention to methods for evaluating the lattice strain in nitride wafers, on which the dislocation density depends.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
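As a minimal illustration of the modular-arithmetic view (not of the filtering algorithms proposed in this work), the following Python sketch checks a cyclic precedence condition in which start times are coupled to the period through an iteration shift; the variable names and numbers are invented.

def modular_precedence_ok(s_i, s_j, dur_i, period, shift):
    """Cyclic precedence between activities i and j under a given period:
    activity j, displaced by `shift` iterations, must start after i finishes,
    i.e. s_j + shift * period >= s_i + dur_i. This is the non-linear coupling
    between start times and period, written as a plain check rather than a propagator."""
    return s_j + shift * period >= s_i + dur_i

# Toy schedule: period 10, activity i starts at 7 and lasts 5,
# activity j starts at 2 but one iteration later (shift = 1).
print(modular_precedence_ok(s_i=7, s_j=2, dur_i=5, period=10, shift=1))   # True: 12 >= 12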
Abstract:
In addition to astronomical observations with ground- and satellite-based instruments, a further experimental approach to astrophysical questions exists in the form of a selection of extraterrestrial material available for laboratory analysis. This includes interplanetary dust particles, samples returned to Earth by spacecraft, and primitive meteorites. Of particular interest are the so-called primitive carbonaceous chondrites, a class of meteorites that have hardly been altered since their formation in the early solar system. In addition to early solar material, they contain presolar minerals that condensed in the stellar winds of supernovae and red giant stars and survived the formation of our solar system largely unchanged. Structural, chemical, and isotopic analyses of these samples are therefore of great relevance to a wide range of astrophysical research fields. In the present work, laboratory analyses of constituents of primitive meteorites were carried out using state-of-the-art physical methods. Because of the variety of properties to be investigated and the small sizes of the analysed particles, between a few nanometres and a few micrometres, high demands had to be placed on detection efficiency and spatial resolution. By combining different methods, a new methodological approach to the analysis of presolar minerals (for example SiC) was developed. Owing to the small amounts of available material, this concept is based on the parallel, non-destructive pre-characterization of a large number of presolar grains with respect to their content of diagnostic trace elements. A subsequent mass-spectrometric investigation of the identified grains with high concentrations of elements of interest can provide information on the nucleosynthetic conditions in their stellar sources. Furthermore, analyses of meteoritic nanodiamonds were carried out, whose small sizes of a few nanometres lead to strongly modified solid-state properties. Within this work, a quantitative description of the quantum confinement effects occurring in these size-distributed semiconductor nanoparticles was developed; the derived results are also relevant for nanotechnological research. The core of the present work consists of investigations of early solar particles, the so-called refractory metal nuggets (RMNs). By means of structural, chemical, and isotopic analyses, and by comparing the results with thermodynamic calculations, direct evidence of condensation processes in the early solar nebula was obtained for the first time. The analysed RMNs belong to the first solid condensates formed in the early solar system and appear not to have been altered by secondary processes since their formation. Furthermore, the cooling rate of the gas of the local solar nebula in which the first condensation processes took place could be determined for the first time, at 0.5 K per year, allowing a detailed look into the thermodynamic history of the early solar system. The extracted parameters have far-reaching implications for models of the formation of the first solar solids, which are the basic building blocks of planet formation.
Abstract:
Recent research has shown that the performance of a single, arbitrarily efficient algorithm can be significantly outperformed by a portfolio of algorithms that may individually be slower on average. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work through the investigation, analysis and construction of several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started from a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms, and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for dealing with COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that also turned out to be competitive in the MiniZinc Challenge, the reference competition for CP solvers.
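The following Python sketch conveys the flavour of per-instance solver scheduling in a sequential portfolio: it selects the k nearest known instances in feature space and splits the time budget among solvers in proportion to how many of those neighbours each solver solved. It is a simplified, SUNNY-like heuristic with invented feature vectors, solver names, and timeout, not the actual algorithm implemented in sunny-cp.

import numpy as np

# Toy training data: feature vectors of known instances and, for each solver,
# which of those instances it solved within the timeout (all values invented).
train_features = np.array([[10.0, 0.2], [12.0, 0.3], [80.0, 0.9], [75.0, 0.8]])
solved = {
    "gecode":  np.array([1, 1, 0, 0]),
    "chuffed": np.array([0, 1, 1, 1]),
}
TIMEOUT = 1800  # seconds

def knn_schedule(features, k=3):
    """Pick the k nearest known instances and split the timeout among solvers
    in proportion to how many of those neighbours each solver solved."""
    dists = np.linalg.norm(train_features - np.asarray(features), axis=1)
    nearest = np.argsort(dists)[:k]
    counts = {name: int(s[nearest].sum()) for name, s in solved.items()}
    total = sum(counts.values()) or 1
    return {name: TIMEOUT * c / total for name, c in counts.items() if c > 0}

print(knn_schedule([11.0, 0.25]))   # -> {'gecode': 900.0, 'chuffed': 900.0}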
Abstract:
In this thesis we deal with the problem of describing a transportation network in which the objects in movement are subject to both finite transportation capacity and finite accommodation capacity. The movements across such a system are, realistically, simultaneous in nature, which poses some challenges when formulating a mathematical description. We tried to derive such a general model from one posed on a simplified problem based on asynchronicity in particle transitions. We did so by considering one-step processes, under the assumption that the system can be described through discrete-time Markov processes with finite state space. After describing the pre-established dynamics in terms of master equations, we determined the stationary states of the processes considered. Numerical simulations then led to the conclusion that a general system naturally evolves toward a congested state when its particles transition simultaneously and a single constraint, in the form of network node capacity, is considered. Moreover, the congested nodes of a system tend to be located in adjacent spots in the network, thus forming local clusters of congested nodes.
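The following Python sketch gives a toy version of the dynamics described above: particles hop between neighbouring nodes of a small ring network whose nodes have a finite accommodation capacity, and blocked hops let congested nodes emerge. It is a crude synchronous simulation with invented parameters, not the master-equation treatment used in the thesis.

import random

random.seed(0)

# Ring network of N nodes, each with the same finite accommodation capacity.
N, CAPACITY, PARTICLES, STEPS = 10, 3, 24, 50
occupancy = [0] * N
for _ in range(PARTICLES):
    # Random initial placement that respects the capacity constraint.
    node = random.choice([i for i in range(N) if occupancy[i] < CAPACITY])
    occupancy[node] += 1

def step(occ):
    """One update: every particle attempts to hop to a random neighbour;
    a hop is accepted only while the destination node has spare room."""
    new = occ[:]
    for node in range(N):
        for _ in range(occ[node]):
            dest = (node + random.choice([-1, 1])) % N
            if new[dest] < CAPACITY:
                new[dest] += 1
                new[node] -= 1
    return new

for _ in range(STEPS):
    occupancy = step(occupancy)

congested = [i for i, n in enumerate(occupancy) if n == CAPACITY]
print(occupancy, "congested nodes:", congested)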
Abstract:
Background: Medication-related problems are common in the growing population of older adults, and inappropriate prescribing is a preventable risk factor. Explicit criteria such as the Beers criteria provide a valid instrument for describing the rate of inappropriate medication (IM) prescriptions among older adults. Objective: To reduce IM prescriptions based on explicit Beers criteria using a nurse-led intervention in a nursing-home (NH) setting. Study Design: The pre/post design included IM assessment at study start (pre-intervention), a 4-month intervention period, IM assessment after the intervention period (post-intervention) and a further IM assessment at 1-year follow-up. Setting: A 204-bed inpatient NH in Bern, Switzerland. Participants: NH residents aged ≥60 years. Intervention: The intervention comprised four key elements: (i) adaptation of the Beers criteria to the Swiss setting; (ii) IM identification; (iii) IM discontinuation; and (iv) staff training. Main Outcome Measure: IM prescription at study start, after the 4-month intervention period and at 1-year follow-up. Results: The mean ± SD resident age was 80.3 ± 8.8 years. Residents were prescribed a mean ± SD of 7.8 ± 4.0 medications. The prescription rate of IMs decreased from 14.5% pre-intervention to 2.8% post-intervention (relative risk [RR] = 0.2; 95% CI 0.06, 0.5). The risk of IM prescription increased, though not statistically significantly, in the 1-year follow-up period compared with post-intervention (RR = 1.6; 95% CI 0.5, 6.1). Conclusions: This intervention to reduce IM prescriptions based on explicit Beers criteria was feasible, easy to implement in an NH setting, and resulted in a substantial decrease in IMs. These results underscore the importance of involving nursing staff in the medication prescription process in a long-term care setting.
Abstract:
AIM: The purpose of this study was to evaluate, by Knoop microhardness (KHN), the activation of a resin-modified glass ionomer restorative material (RMGI; Vitremer, 3M ESPE, shade A3) by halogen lamp (QTH) or light-emitting diode (LED) under two storage conditions (24 hours and 6 months) and at two depths (0 and 2 mm). MATERIALS AND METHODS: The specimens were randomly divided into 3 experimental groups (n = 10) according to the activation mode and evaluated at each depth after 24 hours and after 6 months of storage. Activation was performed with QTH for 40 s (700 mW/cm2) and with LED for 40 or 20 s (1,200 mW/cm2). After 24 hours and 6 months of storage at 37°C in relative humidity in a lightproof container, the Knoop microhardness test was performed. Statistics: Data were analysed by three-way ANOVA and Tukey post hoc tests (p < 0.05). RESULTS: All evaluated factors showed significant differences (p < 0.05). After 24 hours there were no differences among the experimental groups. KHN at 0 mm was significantly higher than at 2 mm. After 6 months, microhardness values increased for all groups, with the LED-activated groups showing higher values than the QTH-activated ones. CONCLUSION: Light activation with LED positively influenced the KHN of the RMGI evaluated after 6 months.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from the corresponding emissions based on measured air-handling parameters.
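As a sketch of how such a dynamic constraint model can translate commanded into achieved values, the following Python snippet integrates a canonical second-order response to a commanded step; the natural frequency and damping ratio are illustrative placeholders, not the parameters identified from the real transient data in this work.

import numpy as np

def achieved_from_commanded(cmd, dt=0.01, wn=2.0, zeta=0.7):
    """Second-order response y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u, integrated
    with forward Euler; models the lag between a commanded air-handling
    parameter (e.g. boost pressure) and the value actually achieved.
    wn and zeta stand in for parameters identified from transient data."""
    y, ydot = cmd[0], 0.0
    out = []
    for u in cmd:
        ydd = wn**2 * (u - y) - 2 * zeta * wn * ydot
        ydot += ydd * dt
        y += ydot * dt
        out.append(y)
    return np.array(out)

t = np.arange(0, 5, 0.01)
commanded = np.where(t < 1.0, 1.0, 1.5)        # step change in the commanded value
achieved = achieved_from_commanded(commanded)
print(f"commanded 1.5, achieved at t = 3 s: {achieved[300]:.3f}")   # index 300 = t = 3 s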