890 results for Scale validation process


Relevance:

30.00%

Publisher:

Abstract:

Five pilot-scale steam explosion pretreatments of sugarcane bagasse followed by alkaline delignification were explored. The solubilised lignin was precipitated with 98% sulphuric acid. Most of the pentosan (82.6%) and acetyl group fractions were solubilised during pretreatment, while 90.2% of the cellulose and 87.0% of the lignin were recovered in the solid fraction. Approximately 91% of the lignin and 72.5% of the pentosans contained in the steam-exploded solids were solubilised by delignification, resulting in a pulp containing almost 90% cellulose. Acidification of the black liquors allowed the recovery of 48.3% of the lignin contained in the raw material. Around 14% of the lignin, 22% of the cellulose and 26% of the pentosans were lost during the process. In order to increase material recovery, major changes should be made to the process setup, such as the introduction of efficient condensers and a reduction in the number of washing steps.
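
As a hedged illustration of the mass balance behind these figures, the sketch below chains the reported per-stage lignin yields and back-calculates the precipitation yield implied by the 48.3% overall recovery; the chaining assumption and the inferred yield are ours, not the paper's.

```python
# Hypothetical mass-balance sketch: overall recovery as the product of
# per-stage yields taken from the abstract.
def overall_recovery(stage_yields):
    result = 1.0
    for y in stage_yields:
        result *= y
    return result

# Lignin: 87.0% retained in the pretreated solids, ~91% of that
# solubilised by delignification.
lignin_in_liquor = overall_recovery([0.870, 0.91])    # ~0.792 of raw lignin
# Back-calculated (inferred, not reported): precipitation yield needed to
# reproduce the 48.3% overall lignin recovery.
precipitation_yield = 0.483 / lignin_in_liquor        # ~0.61
print(f"lignin reaching the black liquor: {lignin_in_liquor:.1%}")
print(f"implied precipitation yield: {precipitation_yield:.1%}")
```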

Relevance:

30.00%

Publisher:

Abstract:

Solar reactors can be attractive for photodegradation processes due to their lower electrical energy demand. The performance of a solar reactor in two flow configurations, i.e. plug flow and mixed flow, is compared based on experimental results obtained with a pilot-scale solar reactor. Aqueous solutions of phenol were used as a model for industrial wastewater containing organic contaminants. Batch experiments were carried out under clear sky, resulting in removal rates in the range of 96–100%. The dissolved organic carbon removal rate was simulated by an empirical model based on neural networks, which was adjusted to the experimental data, resulting in a correlation coefficient of 0.9856. This approach made it possible to estimate the effects of process variables that could not be evaluated from the experiments. Simulations with different reactor configurations indicated relevant aspects for the design of solar reactors.
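
A minimal sketch of the kind of empirical neural-network fit described above, using scikit-learn's MLPRegressor as a stand-in for the paper's unspecified network; the process variables and data are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Illustrative process variables: accumulated UV dose, initial DOC, flow rate.
X = rng.uniform([0.0, 10.0, 0.5], [50.0, 100.0, 5.0], size=(200, 3))
# Synthetic removal fraction, just to make the example runnable.
y = 1.0 - np.exp(-0.08 * X[:, 0]) + rng.normal(0.0, 0.02, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
# Correlation between predictions and data, analogous to the reported r.
r = np.corrcoef(model.predict(X), y)[0, 1]
print(f"correlation coefficient: {r:.4f}")
```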

Relevance:

30.00%

Publisher:

Abstract:

Aim. The aim of this study was to evaluate the internal reliability and validity of the Brazilian Portuguese version of the Duke Anticoagulation Satisfaction Scale (DASS) among cardiovascular patients. Background. Oral anticoagulation is widely used to prevent and treat thromboembolic events in several conditions, especially in cardiovascular diseases; however, this therapy can induce dissatisfaction and reduce quality of life. Design. Methodological and cross-sectional research design. Methods. The cultural adaptation of the DASS included translation and back-translation, discussions with healthcare professionals and patients to ensure conceptual equivalence, semantic evaluation and instrument pretest. The Brazilian Portuguese version of the DASS was tested among subjects followed in a university hospital anticoagulation outpatient clinic. The psychometric properties were assessed by construct validity (convergent, known groups and dimensionality) and internal consistency/reliability (Cronbach's alpha). Results. A total of 180 subjects under oral anticoagulation formed the baseline validation population. Correlations between the DASS total score and the SF-36 domains were moderate for General health (r = -0.47, p < 0.01), Vitality (r = -0.44, p < 0.01) and Mental health (r = -0.42, p < 0.01) (convergent validity). Age and length of oral anticoagulation therapy (in years) were weakly correlated with the total DASS score and most of the subscales, except Limitation (r = -0.375, p < 0.01) (known groups). The Cronbach's alpha coefficient was 0.79 for the total scale and ranged from 0.76 (hassles and burdens) to 0.46 (psychological impact) among the domains, confirming internal consistency/reliability. Conclusions. The Brazilian Portuguese version of the DASS has shown levels of reliability and validity comparable with the original English version. Relevance to clinical practice. Healthcare practitioners and researchers need internationally validated measurement tools to compare outcomes of interventions in clinical management and research in oral anticoagulation therapy.
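
The internal-consistency figures quoted above come from Cronbach's alpha; a minimal sketch of its standard computation on an illustrative items matrix follows.

```python
# Cronbach's alpha for a k-item scale: alpha = k/(k-1) * (1 - sum of item
# variances / variance of total scores). Data are illustrative.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(180, 1))                      # shared trait
items = latent + rng.normal(scale=0.8, size=(180, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```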

Relevance:

30.00%

Publisher:

Abstract:

Background: The responsiveness of oral health-related quality of life (OHRQoL) instruments has become relevant, given the increasing tendency to use OHRQoL measures as outcomes in clinical trials and evaluation studies. The purpose of this study was to assess the responsiveness to dental treatment of the Brazilian Scale of Oral Health Outcomes for 5-year-old children (SOHO-5). Methods: One hundred and fifty-four children and their parents completed the child self-reports and parental reports of the SOHO-5 prior to treatment and 7 to 14 days after the completion of treatment. The post-treatment questionnaire also included a global transition judgment that assessed subjects' perceptions of change in their oral health following treatment. Change scores were calculated by subtracting post-treatment SOHO-5 scores from pre-treatment scores. Longitudinal construct validity was assessed by using one-way analysis of variance to examine the association between change scores and the global transition judgments. Measures of responsiveness included the standardized effect size (ES) and the standardized response mean (SRM). Results: The improvement of children's oral health after treatment is reflected in mean pre- and post-treatment SOHO-5 scores, which declined from 2.67 to 0.61 (p < 0.001) for the child self-reports and from 4.04 to 0.71 (p < 0.001) for the parental reports. Mean change scores showed a gradient in the expected direction across categories of the global transition judgment, and there were significant differences in the pre- and post-treatment scores of those who reported improving a little (p < 0.05) and those who reported improving a lot (p < 0.001). For both versions, the ES and SRM based on mean change scores, for total scores and for categories of the global transition judgment, were moderate to large. Conclusions: The Brazilian SOHO-5 is responsive to change and can be used as an outcome indicator in future clinical trials. Both the parental and the child versions presented satisfactory results.
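
A minimal sketch of the two responsiveness indices used here, assuming the conventional definitions (ES = mean change / SD of baseline scores; SRM = mean change / SD of change scores); the data are illustrative, not the study's.

```python
import numpy as np

def responsiveness(pre, post):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    change = pre - post                      # a decline means improvement here
    es = change.mean() / pre.std(ddof=1)     # standardized effect size
    srm = change.mean() / change.std(ddof=1) # standardized response mean
    return es, srm

rng = np.random.default_rng(2)
pre = rng.normal(2.67, 1.5, 154).clip(min=0)
post = rng.normal(0.61, 0.8, 154).clip(min=0)
es, srm = responsiveness(pre, post)
print(f"ES = {es:.2f}, SRM = {srm:.2f}")
```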

Relevance:

30.00%

Publisher:

Abstract:

Background: Most of the instruments available to measure oral health-related quality of life (OHRQoL) in paediatric populations focus on older children, whereas parental reports are used for very young children. The Scale of Oral Health Outcomes for 5-year-old children (SOHO-5) assesses the OHRQoL of very young children through self-reports and parental proxy reports. We aimed to cross-culturally adapt the SOHO-5 to the Brazilian Portuguese language and to assess its reliability and validity. Findings: We tested the quality of the cross-cultural adaptation in two pilot studies with 40 children aged 5–6 years and their parents. The measure was tested for reliability and validity on 193 children who attended the paediatric dental screening program at the University of São Paulo. The children were also clinically examined for dental caries. Internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.90 for the children's self-reports and 0.77 for the parental proxy reports. The test-retest reliability results, based on repeated administrations to 159 children, were excellent; the intraclass correlation coefficient was 0.98 for the parental and 0.92 for the child reports. In general, the construct validity was satisfactory, with consistent and strong associations between the SOHO-5 and different subjective global ratings of oral health, perceived dental treatment need and overall well-being in both the parental and children's versions (p < 0.001). The SOHO-5 was also able to clearly discriminate between children with and without a history of dental caries (mean scores: 5.8 and 1.1, respectively; p < 0.001). Conclusion: The present study demonstrated that the SOHO-5 exhibits satisfactory psychometric properties and is applicable to 5- to 6-year-old children in Brazil.
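
A minimal sketch of a test-retest intraclass correlation, here ICC(3,1) from a two-way ANOVA decomposition; the abstract does not state which ICC form was used, so this is one plausible choice, and the data are illustrative.

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1) for scores of shape (n_subjects, k_administrations)."""
    scores = np.asarray(scores, float)
    n, k = scores.shape
    grand = scores.mean()
    # Between-subjects mean square and residual mean square.
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    sse = ((scores - scores.mean(axis=1, keepdims=True)
            - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(3)
true = rng.normal(3.0, 2.0, 159)
test = true + rng.normal(0, 0.3, 159)
retest = true + rng.normal(0, 0.3, 159)
print(f"ICC(3,1) = {icc_3_1(np.column_stack([test, retest])):.2f}")
```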

Relevance:

30.00%

Publisher:

Abstract:

Background: It has been shown that different symptoms or symptom combinations of neuropathic pain (NeP) may correspond to different mechanistic backgrounds and respond differently to treatment. The Neuropathic Pain Symptom Inventory (NPSI) is able to detect distinct clusters of symptoms (i.e. dimensions) with a putative common mechanistic background. The present study describes the psychometric validation of the Portuguese version (PV) of the NPSI. Methods: Patients were seen in two consecutive visits, three to four weeks apart. They were asked to: (i) rate their mean pain intensity in the last 24 hours on an 11-point (0-10) numerical scale; (ii) complete the PV-NPSI; (iii) provide the list of pain medications and doses currently in use. The VAS and the Global Impression of Change (GIC) were filled out at the second visit. Results: The PV-NPSI underwent test-retest reliability assessment, factor analysis, and analysis of sensitivity to change between the two visits. The PV-NPSI was reliable in this setting, with good intra-class correlation for all items. The factor analysis showed that the PV-NPSI assessed different components of neuropathic pain, and five different factors were found. The PV-NPSI was adequate to evaluate patients with neuropathic pain and to detect clusters of NeP symptoms. Conclusions: The psychometric properties of the PV-NPSI render it adequate to evaluate patients with both central and peripheral neuropathic pain syndromes and to detect clusters of NeP symptoms.
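
A minimal sketch of exploratory factor analysis recovering symptom clusters from item responses, with scikit-learn's FactorAnalysis as a stand-in for the study's factor-analytic procedure; the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n, n_items, n_factors = 300, 10, 5
# Synthetic item responses generated from 5 latent symptom dimensions.
loadings = rng.normal(size=(n_items, n_factors))
factors = rng.normal(size=(n, n_factors))
items = factors @ loadings.T + rng.normal(scale=0.5, size=(n, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(items)
# Items loading strongly on the same component form one symptom cluster.
print(np.round(fa.components_, 2))
```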

Relevance:

30.00%

Publisher:

Abstract:

This study aimed to culturally adapt and analyse the psychometric properties of the Brazilian version of Underwood's Daily Spiritual Experience Scale (DSES). The adaptation followed internationally recommended steps, and the adapted version remained equivalent to the original after adjustments to the wording of five items. When administered to 179 medical-surgical patients, it showed evidence of internal consistency (Cronbach's alpha = 0.91), temporal stability (ICC = 0.94 on test-retest) and convergent construct validity, correlating with the Intrinsic Religiosity subscale of the DUREL instrument (r = 0.56; p < 0.001). Exploratory factor analysis extracted three components, explaining 60.5% of the total variance. The Brazilian version of the DSES shows evidence of reliability and validity among hospitalised patients. Further studies are needed to confirm its factor structure and to test its applicability in different populations.
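
A minimal sketch of the convergent-validity check reported above, i.e. a Pearson correlation between the adapted scale's total score and a related established measure; the variable names and data are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
trait = rng.normal(size=179)                          # shared latent trait
dses_total = 4 * trait + rng.normal(scale=3, size=179)
durel_intrinsic = 2 * trait + rng.normal(scale=2.5, size=179)
# A moderate positive r supports convergent validity.
r, p = pearsonr(dses_total, durel_intrinsic)
print(f"r = {r:.2f}, p = {p:.3g}")
```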

Relevance:

30.00%

Publisher:

Abstract:

The Amazon basin is a region of constant scientific interest due to its environmental importance and the global relevance of its biodiversity and climate. The seasonal variation in water volume is one example of the topics studied nowadays. In general, variations in river levels depend primarily on the climate and on the physical characteristics of the corresponding basins. The main factor influencing the water level in the Amazon basin is the intense rainfall over this region, a consequence of the humidity of the tropical climate. Unfortunately, the Amazon basin is an area lacking water-level information, owing to the difficulty of access for local operations. The purpose of this study is to compare and evaluate the Equivalent Water Height (Ewh) from the GRACE (Gravity Recovery And Climate Experiment) mission, in order to study the connection between water loading and the vertical variations of the crust due to the hydrological cycle. To achieve this goal, the Ewh is compared with in-situ information from limnimeters. For the analysis, the correlation coefficients, phase and amplitude of the GRACE Ewh solutions and the in-situ data were computed, as well as the timing of drought periods in different parts of the basin. The results indicate that vertical variations of the lithosphere due to water mass loading can reach 5 to 7 cm per year in the sedimentary and flooded areas of the region, where water level variations can reach 8 to 10 m.
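
A minimal sketch of estimating the annual amplitude and phase of an Ewh-like series by least-squares sinusoid fitting, one plausible way to obtain the quantities compared above; the series and method details are assumptions, not the paper's exact procedure.

```python
import numpy as np

t = np.arange(0, 10, 1 / 12.0)                    # 10 years, monthly samples
rng = np.random.default_rng(5)
# Synthetic seasonal signal: 0.8 m amplitude, peak at 0.25 yr.
series = 0.8 * np.cos(2 * np.pi * (t - 0.25)) + rng.normal(0, 0.1, t.size)

# Fit A*cos(2*pi*t) + B*sin(2*pi*t) + C, then convert to amplitude/phase.
design = np.column_stack([np.cos(2 * np.pi * t),
                          np.sin(2 * np.pi * t),
                          np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(design, series, rcond=None)
amplitude = np.hypot(a, b)
phase_years = np.arctan2(b, a) / (2 * np.pi)      # lag of the annual peak
print(f"amplitude = {amplitude:.2f} m, phase = {phase_years:.2f} yr")
```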

Relevance:

30.00%

Publisher:

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. Annotation is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is its independence from biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, of the standard Support Vector Machine (SVM) algorithm, with the creation of the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, greatly improved the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different considered regions. Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
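
As a hedged analogy to the Balanced SVM idea (not the author's exact algorithm), the sketch below shows how a standard SVM's misclassification penalty can be reweighted per class to counter training-set imbalance, using scikit-learn's SVC; the toy data are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Imbalanced toy data: 900 majority-class vs 100 minority-class examples.
X = np.vstack([rng.normal(0, 1, (900, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 900 + [1] * 100)

plain = SVC().fit(X, y)
balanced = SVC(class_weight='balanced').fit(X, y)  # per-class C reweighting
for name, clf in [('plain', plain), ('balanced', balanced)]:
    recall = (clf.predict(X[y == 1]) == 1).mean()
    print(f"{name}: minority-class recall = {recall:.2f}")
```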

Relevance:

30.00%

Publisher:

Abstract:

Compared with other mature engineering disciplines, fracture mechanics of concrete is still a developing field, and it is very important for structures such as bridges that are subject to dynamic loading. A historical overview of work in the field is provided, and then the project is presented. The project applies the Digital Image Correlation (DIC) technique to the detection of cracks at the surface of concrete prisms (500 mm × 100 mm × 100 mm) subject to flexural loading (four-point bending test). The technique provides displacement measurements over the region of interest; from this displacement field, information about the crack mouth opening displacement (CMOD) is obtained and related to the applied load. The evolution of the fracture process is shown through graphs and graphical maps of the displacement at selected steps of the loading process. The study shows that the DIC system makes it possible to detect the appearance and evolution of cracks, even before they become visually detectable.
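
A minimal sketch of the DIC principle on synthetic speckle images: the displacement of a small subset is recovered from the peak of a mean-removed cross-correlation. Real DIC software adds full normalization and sub-pixel interpolation.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(7)
reference = rng.random((64, 64))                          # speckle pattern
deformed = np.roll(reference, shift=(2, 3), axis=(0, 1))  # known rigid shift

size, r0, c0 = 16, 24, 24                                 # subset geometry
subset = reference[r0:r0 + size, c0:c0 + size]
corr = correlate2d(deformed - deformed.mean(),
                   subset - subset.mean(), mode='same')
peak = np.unravel_index(np.argmax(corr), corr.shape)
# Peak position minus the subset centre gives the displacement vector.
dy = peak[0] - (r0 + size // 2)
dx = peak[1] - (c0 + size // 2)
print(f"estimated displacement: ({dy}, {dx})")            # expect (2, 3)
```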

Relevance:

30.00%

Publisher:

Abstract:

This work is a detailed study of hydrodynamic processes in a defined area: the littoral in front of the Venice Lagoon and its inlets, complex morphological areas of interconnection. A finite element hydrodynamic model of the Venice Lagoon and the Adriatic Sea has been developed in order to study the coastal current patterns and the exchanges at the inlets of the Venice Lagoon. This is the first work in this area that attempts to model the interaction dynamics by running a model for the lagoon and the Adriatic Sea together. First, the barotropic processes near the inlets of the Venice Lagoon were studied. Data from more than ten tide gauges deployed in the Adriatic Sea were used to calibrate the simulated water levels. To validate the model results, empirical flux data measured by ADCP probes installed inside the inlets of Lido and Malamocco were used, and the exchanges through the three inlets of the Venice Lagoon were analyzed. The comparison between modelled and measured fluxes at the inlets demonstrated the model's ability to reproduce both tide- and wind-induced water exchanges between the sea and the lagoon. As a second step, the small-scale processes around the inlets that connect the Venice Lagoon with the Northern Adriatic Sea were investigated by means of 3D simulations. Maps of vorticity were produced, considering the influence of tidal flows and wind stress in the area. A sensitivity analysis was carried out to determine the importance of advection and of the baroclinic pressure gradients in the development of the vortical processes seen along the littoral close to the inlets. Finally, a comparison with real measurements, surface velocity data from HF Radar near the Venice inlets, was performed, which allows for a better understanding of the processes and their seasonal dynamics. The results outline the predominance of wind and tidal forcing in the coastal area. Wind forcing acts mainly on the mean coastal current, inducing its detachment offshore during Sirocco events and an increase of littoral currents during Bora events. The Bora action is more homogeneous over the whole coastal area, whereas the Sirocco has its strongest impact in the south, near the Chioggia inlet. Tidal forcing at the inlets is mainly barotropic. The sensitivity analysis shows that advection is the main physical process responsible for the persistent vortical structures present along the littoral between the Venice Lagoon inlets. The comparison with HF Radar measurements not only permitted a validation of the model results, but also a description of different patterns in specific periods of the year. The success of the 2D and 3D simulations in reproducing the sea surface elevation (SSE) inside and outside the Venice Lagoon, the tidal flow through the lagoon inlets, and the small-scale phenomena occurring along the littoral indicates that the finite element approach is the most suitable tool for the investigation of coastal processes. For the first time, as shown by the flux modelling, the physical processes that drive the interaction between the two basins were reproduced.
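
A minimal sketch of the vorticity diagnostic mentioned above: the vertical vorticity dv/dx - du/dy computed on a regular grid from a synthetic velocity field (the actual model uses an unstructured finite element mesh).

```python
import numpy as np

dx = dy = 100.0                                  # grid spacing in metres
x, y = np.meshgrid(np.arange(0, 5000, dx), np.arange(0, 5000, dy))
# Synthetic eddy: solid-body rotation around the domain centre.
u = -(y - 2500.0) * 1e-4
v = (x - 2500.0) * 1e-4

dv_dx = np.gradient(v, dx, axis=1)
du_dy = np.gradient(u, dy, axis=0)
vorticity = dv_dx - du_dy                        # expect ~2e-4 s^-1 everywhere
print(f"max vorticity: {vorticity.max():.1e} s^-1")
```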

Relevance:

30.00%

Publisher:

Abstract:

In territories where food production is scattered across many small or medium-size, or even domestic, farms, a large amount of heterogeneous residues is produced each year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the production process carried out in each period. Coupling high-efficiency micro-cogeneration units with easily handled biomass conversion equipment suitable for treating different materials would provide important advantages to farmers and to the community as well; increasing the feedstock flexibility of gasification units is therefore seen as a further paramount step towards their wide adoption in rural areas and as a real necessity for their use at small scale. Two main research topics were considered of primary concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. Accordingly, the present work is divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels: wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (above all, the temperature profile), in order to point out the main differences that prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different air-to-biomass ratios and taking downdraft gasification as the particular application examined. The differences in syngas production and working conditions (above all, process temperatures) among the considered fuels were related to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (involving volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process.
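
A hedged sketch of one ingredient of such a kinetic-free model: the water-gas shift equilibrium, CO + H2O <-> CO2 + H2, which links gas composition to temperature in the gasification zone. The equilibrium-constant correlation below is a common literature approximation, not necessarily the thesis' own expression.

```python
import numpy as np
from scipy.optimize import brentq

def k_wgs(T):
    """Approximate water-gas shift equilibrium constant, T in kelvin."""
    return np.exp(4577.8 / T - 4.33)

def shift_extent(n_co, n_h2o, n_co2, n_h2, T):
    """Moles of CO converted at equilibrium for an initial gas mixture."""
    def residual(x):
        return ((n_co2 + x) * (n_h2 + x)
                - k_wgs(T) * (n_co - x) * (n_h2o - x))
    # Bracket the physically admissible range of the extent of reaction.
    return brentq(residual, -min(n_co2, n_h2) + 1e-9,
                  min(n_co, n_h2o) - 1e-9)

# Example: equimolar CO/H2O at a typical gasification-zone temperature.
x = shift_extent(1.0, 1.0, 0.0, 0.0, T=1073.15)
print(f"CO conversion at 800 C: {x:.2f} mol per mol CO")
```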
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (above all, particle size), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its own heat of pyrolysis) all have comparable weights on the process development, so that the corresponding time can depend on any of these factors, according to the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species. Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum height of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified in each run. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio.
The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more extensive gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream, and suitable gas cleaning systems have to be designed accordingly. In this work, an overall study on the assessment of gas cleaning lines is carried out. Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h respectively. Their performances were examined on the basis of their optimal working conditions (above all, efficiency, temperature and pressure drops) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints mainly derived from the same performance analysis on the cleaning units and from the likely synergic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in path design was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequent relevant air consumption for this operation, were calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved infeasible for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only method considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, among which total pressure drop, total energy losses, number of units and secondary material consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and nahcolite adsorbers for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.

Relevance:

30.00%

Publisher:

Abstract:

This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows any type of problem to be easily modeled and solved with a problem-independent framework, differently from local search and metaheuristic methods, which are highly problem specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the absolute generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which encloses concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
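
A hedged sketch of the local branching idea in a CP setting: around an incumbent 0-1 solution, search is restricted to a Hamming-distance neighbourhood by posting a single linear constraint. Google OR-tools CP-SAT serves as a stand-in CP solver, and the toy knapsack model is illustrative only.

```python
from ortools.sat.python import cp_model

values = [10, 7, 5, 9, 3]
weights = [4, 3, 2, 5, 1]
capacity = 8
incumbent = [1, 0, 1, 0, 0]           # known feasible solution (value 15)
k = 2                                 # neighbourhood radius

model = cp_model.CpModel()
x = [model.NewBoolVar(f"x{i}") for i in range(len(values))]
model.Add(sum(w * xi for w, xi in zip(weights, x)) <= capacity)

# Local branching constraint: number of variables flipped w.r.t. the
# incumbent must not exceed k.
flips = sum(1 - xi if bit else xi for bit, xi in zip(incumbent, x))
model.Add(flips <= k)

model.Maximize(sum(v * xi for v, xi in zip(values, x)))
solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("best in neighbourhood:", [solver.Value(xi) for xi in x],
          "value =", solver.ObjectiveValue())
```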

Relevance:

30.00%

Publisher:

Abstract:

The main goals of this Ph.D. study are to investigate the regional and global geophysical components related to present-day polar ice melting and to provide independent cross-validation checks of GIA models, using both geophysical data from satellite missions and geological observations from far-field sites, in order to determine lower and upper bounds on the uncertainty of the GIA effect. The subject of this thesis is sea level change on scales from decades to millennia. Within the ice2sea collaboration, we developed a Fortran numerical code to analyze the local short-term sea level change and vertical deformation resulting from the loss of ice mass. This method is used to investigate the polar regions of Greenland and Antarctica. We have used a mass balance based on ICESat data for the Greenland ice sheet and a plausible mass balance for the Antarctic ice sheet. We have determined the regional and global fingerprints of sea level variations, vertical deformations of the solid surface of the Earth and variations in the shape of the geoid for each ice source mentioned above. Coastal areas are affected by the long-wavelength component of the GIA process, hence understanding the response of the Earth to loading is crucial in various contexts. Based on the hypothesis that Earth mantle materials obey a linear rheology, and that the physical parameters of this rheology are characterized only by their depth dependence, we investigate the glacial isostatic adjustment effect on far-field sites of the Mediterranean area using an improved SELEN program. We present new and revised observations for archaeological fish tanks located along the Tyrrhenian and Adriatic coasts of Italy and new relative sea level (RSL) data for SE Tunisia. The spatial and temporal variations of Holocene sea levels studied in central Italy and Tunisia provided important constraints on the melting history of the major ice sheets.
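
A minimal sketch, not the thesis' Fortran code: the zeroth-order ("eustatic") term of a sea-level fingerprint, i.e. the uniform rise obtained by spreading an ice-mass loss over the ocean surface; the full fingerprint adds gravitational, rotational and deformational corrections.

```python
OCEAN_AREA_M2 = 3.61e14          # ~3.61e8 km^2 of ocean surface
WATER_DENSITY = 1000.0           # kg/m^3

def eustatic_rise_mm(ice_loss_gt):
    """Uniform sea-level rise (mm) from an ice-mass loss in gigatonnes."""
    mass_kg = ice_loss_gt * 1e12
    return mass_kg / (WATER_DENSITY * OCEAN_AREA_M2) * 1000.0

# Illustrative figure: ~230 Gt/yr of Greenland mass loss corresponds to
# roughly 0.64 mm/yr of eustatic rise (about 361.8 Gt per mm).
print(f"{eustatic_rise_mm(230):.2f} mm")
```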

Relevance:

30.00%

Publisher:

Abstract:

The present work validates an instrument for measuring the patient-therapist bond (Client Attachment to Therapist Scale, CATS; Mallinckrodt, Coble & Gantt, 1995) and tests hypotheses on the relationships between self-efficacy expectation, general attachment style, therapeutic relationship (therapy satisfaction), patient-therapist bond and therapy outcome in drug-dependent patients in inpatient post-acute treatment. The instrument validation (one-week retest) included 119 patients from 2 clinics and 13 experts. The psychometric properties of the instrument are very satisfactory. 365 patients and 27 therapists from 4 clinics took part in the naturalistic therapy evaluation study (pre-, process and post-measurements: T0, T1, T2). In total, 44.1% of the patients completed their inpatient stay as planned. On the patient side, age and principal diagnosis proved to be predictors of therapy outcome; on the therapist side, the therapeutic orientation practised. Self-efficacy expectation, general attachment style, patient-therapist bond and therapy satisfaction were not suitable for predicting therapy outcome. Self-efficacy expectation, which was far below average at T0, increased over the intervention period, with a moderating effect of the patient-therapist bond. There was a high prevalence of insecure general attachment styles, which did not change over the therapy period. Patients' satisfaction with the therapy increased from T1 to T2. Interrater concordance (patient/therapist) in the assessment of the patient-therapist bond increased slightly from T1 to T2. In contrast, therapy satisfaction was rated very differently by patients and therapists at both measurement points. The good psychometric properties of the CATS argue for the superiority of this instrument over the therapy satisfaction scale. The patient-therapist bond should therefore be examined with this instrument in further research on other patient populations, in order to allow generalizable statements about its validity.
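
A hedged sketch of how a moderator effect like the one reported (the patient-therapist bond moderating the gain in self-efficacy) can be tested as an interaction term in a linear model; statsmodels is used, and the variable names and data are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 365
bond = rng.normal(size=n)                     # patient-therapist bond (z-scored)
baseline = rng.normal(size=n)                 # self-efficacy at T0 (z-scored)
# Synthetic outcome with a built-in moderation: the effect of baseline on
# the change score depends on the bond (interaction term).
change = (0.5 + 0.2 * baseline + 0.3 * bond + 0.25 * baseline * bond
          + rng.normal(scale=0.5, size=n))

df = pd.DataFrame({"change": change, "bond": bond, "baseline": baseline})
model = smf.ols("change ~ baseline * bond", data=df).fit()
# A significant baseline:bond coefficient indicates moderation.
print(model.summary().tables[1])
```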