927 results for timing constraint
Abstract:
Background: Previous studies show that chronic hemiparetic patients after stroke present an inability to perform movements with the paretic side of the body. This inability is induced by positive reinforcement of unsuccessful attempts, a concept called learned non-use. Forced use therapy (FUT) and constraint-induced movement therapy (CIMT) were developed with the goal of reversing the learned non-use. These approaches have been proposed for the rehabilitation of the paretic upper limb (PUL). It is unknown what the possible effects of these approaches would be in the rehabilitation of gait and balance. Objectives: To evaluate the effect of modified FUT (mFUT) and modified CIMT (mCIMT) on gait and balance during four weeks of treatment and at 3 months of follow-up. Methods: This study included thirty-seven hemiparetic post-stroke subjects who were randomly allocated into two groups based on the treatment protocol. The non-paretic upper limb was immobilized for 23 hours per day, five days a week. Participants were evaluated at baseline, at the 1st, 2nd, 3rd and 4th weeks of treatment, and three months after randomization. For the evaluation we used the Stroke Impact Scale (SIS), the Berg Balance Scale (BBS) and the Fugl-Meyer Motor Assessment (FM). Gait was analyzed by the 10-meter walk test (T10) and the Timed Up & Go test (TUG). Results: Both groups revealed better health status (SIS), better balance and use of the lower limb (BBS and FM), and greater gait speed (T10 and TUG) during the weeks of treatment and the months of follow-up, compared to baseline. Conclusion: The results show that mFUT and mCIMT are effective in the rehabilitation of balance and gait. Trial registration: ACTRN12611000411943.
Abstract:
The discrepancies between social and biological timing are reflected in shift workers' well-being. The aim of this study was to verify the association between job satisfaction and chronotype among day- and night-shift nursing personnel. Several other variables were also analyzed, including seniority at the hospital and in the same shift, sleep duration, sleep quality, sleepiness, and willingness to change sleep timing. Chronotype was assessed using the morningness-eveningness questionnaire. We studied 514 nursing professionals from a public university hospital. Among day workers, the higher the morningness, the more satisfied the workers were with their job. In contrast, among night workers, job satisfaction was associated with sleep quality and seniority at the hospital, but not with chronotype. Our results suggest that agreement between work schedule and chronotype may help to increase job satisfaction among diurnal workers.
Abstract:
Purpose: To estimate the metabolic activity of rectal cancers at 6 and 12 weeks after completion of chemoradiation therapy (CRT) by 2-[fluorine-18]fluoro-2-deoxy-D-glucose positron emission tomography/computed tomography (18FDG PET/CT) imaging and correlate it with response to CRT. Methods and Materials: Patients with cT2-4N0-2M0 distal rectal adenocarcinoma treated with long-course neoadjuvant CRT (54 Gy, 5-fluorouracil-based) were prospectively studied (ClinicalTrials.gov identifier NCT00254683). All patients underwent 3 PET/CT studies (at baseline and at 6 and 12 weeks from CRT completion). Clinical assessment was at 12 weeks. Maximal standard uptake value (SUVmax) of the primary tumor was measured and recorded at each PET/CT study at 1 h (early) and 3 h (late) after 18FDG injection. Patients with an increase in early SUVmax between 6 and 12 weeks were considered "bad" responders and the others "good" responders. Results: Ninety-one patients were included; 46 patients (51%) were "bad" responders, whereas 45 (49%) were "good" responders. "Bad" responders were less likely to develop a complete clinical response (6.5% vs. 37.8%, respectively; P=.001), less likely to develop significant histological tumor regression (complete or near-complete pathological response; 16% vs. 45%, respectively; P=.008), and exhibited greater final tumor dimension (4.3 cm vs. 3.3 cm; P=.03). The decrease between early (1 h) and late (3 h) SUVmax at the 6-week PET/CT was a significant predictor of "good" response (accuracy of 67%). Conclusions: Patients who developed an increase in SUVmax after 6 weeks were less likely to develop significant tumor downstaging. Early-late SUVmax variation at 6-week PET/CT may help identify these patients and allow tailored selection of CRT-surgery intervals for individual patients. (C) 2012 Elsevier Inc.
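The classification rule is simple enough to state as code; a minimal sketch (illustrative names, not the study's software):

    def classify_response(early_suvmax_6w, early_suvmax_12w):
        """Per the rule above: an increase in early (1 h) SUVmax between
        the 6- and 12-week PET/CT studies marks a 'bad' responder."""
        return "bad" if early_suvmax_12w > early_suvmax_6w else "good"

    # Example: early SUVmax falls from 6.2 to 3.1 between scans -> "good"
    print(classify_response(6.2, 3.1))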
Abstract:
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
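For context, the setting in which such constraint qualifications live is the standard nonlinear program (a generic textbook formulation, not taken from the paper):

    \min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad h_i(x) = 0,\ i = 1,\dots,m, \qquad g_j(x) \le 0,\ j = 1,\dots,p.

A constraint qualification is a condition on the constraints under which every local minimizer x* satisfies the Karush-Kuhn-Tucker conditions

    \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*) + \sum_{j=1}^{p} \mu_j \nabla g_j(x^*) = 0, \qquad \mu_j \ge 0, \qquad \mu_j\, g_j(x^*) = 0;

weaker CQs such as CRSC and CPG therefore certify KKT points, and support convergence proofs, for a broader class of problems.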
Abstract:
We study general properties of the Landau-gauge Gribov ghost form factor σ(p²) for SU(N_c) Yang-Mills theories in the d-dimensional case. We find a qualitatively different behavior for d = 3, 4 with respect to the d = 2 case. In particular, considering any (sufficiently regular) gluon propagator D(p²) and the one-loop-corrected ghost propagator, we prove in the 2d case that the function σ(p²) blows up in the infrared limit p → 0 as −D(0) ln(p²). Thus, for d = 2, the no-pole condition σ(p²) < 1 (for p² > 0) can be satisfied only if the gluon propagator vanishes at zero momentum, that is, D(0) = 0. On the contrary, in d = 3 and 4, σ(p²) is finite also if D(0) > 0. The same results are obtained by evaluating the ghost propagator G(p²) explicitly at one loop, using fitting forms for D(p²) that describe well the numerical data of the gluon propagator in two, three and four space-time dimensions in the SU(2) case. These evaluations also show that, if one considers the coupling constant g² as a free parameter, the ghost propagator admits a one-parameter family of behaviors (labeled by g²), in agreement with previous works by Boucaud et al. In this case the condition σ(0) ≤ 1 implies g² ≤ g_c², where g_c² is a "critical" value. Moreover, a free-like ghost propagator in the infrared limit is obtained for any value of g² smaller than g_c², while for g² = g_c² one finds an infrared-enhanced ghost propagator. Finally, we analyze the Dyson-Schwinger equation for σ(p²) and show that, for infrared-finite ghost-gluon vertices, one can bound the ghost form factor σ(p²). Using these bounds we find again that only in the d = 2 case does one need to impose D(0) = 0 in order to satisfy the no-pole condition. The d = 2 result is also supported by an analysis of the Dyson-Schwinger equation using a spectral representation for the ghost propagator. Thus, if the no-pole condition is imposed, solving the d = 2 Dyson-Schwinger equations cannot lead to a massive behavior for the gluon propagator. These results apply to any Gribov copy inside the so-called first Gribov horizon; i.e., the 2d result D(0) = 0 is not affected by Gribov noise. These findings are also in agreement with lattice data.
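In the notation above, the no-pole analysis rests on the standard one-loop relation between the ghost propagator and the ghost form factor (a sketch of the relations being discussed, not the paper's derivation):

    G(p^2) = \frac{1}{p^2}\,\frac{1}{1 - \sigma(p^2)}, \qquad \sigma(p^2) < 1 \ \text{(no-pole condition)}.

The quoted 2d infrared behavior \sigma(p^2) \simeq -D(0)\ln(p^2) for p \to 0 then makes the condition untenable unless D(0) = 0, which is exactly the dimensional dichotomy established in the abstract.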
Abstract:
Background: A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyze gene regulatory interactions using the Boolean network model and time-series data. The Boolean network considered is restricted in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full search approach. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results: We applied the proposed algorithm to two data sets. First, we used an artificial dataset obtained from a model of the budding yeast cell cycle. The second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully, or at least partially, determined under the Boolean model considered. Conclusions: The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series data of gene expression, and this inference process can be aided by available a priori knowledge.
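To make the inference task concrete, here is a toy sketch (not the authors' algorithm): it enumerates regulator sets and Boolean functions consistent with every observed state transition, which is exactly the exhaustive search the paper avoids by modeling the problem as a CSP and letting constraint propagation prune the space.

    from itertools import combinations, product

    def consistent_models(series, target, max_inputs=2):
        """Enumerate (regulators, truth_table) pairs that explain every
        transition of gene `target` in a binary time series.
        series: list of network states, each a tuple of 0/1 values."""
        n_genes = len(series[0])
        models = []
        for regs in combinations(range(n_genes), max_inputs):
            for table in product((0, 1), repeat=2 ** max_inputs):
                ok = all(
                    table[int("".join(str(s[r]) for r in regs), 2)] == nxt[target]
                    for s, nxt in zip(series, series[1:])
                )
                if ok:
                    models.append((regs, table))
        return models

    # Toy series over 3 genes; candidate regulator pairs for gene 0.
    series = [(0, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 0)]
    print(consistent_models(series, target=0))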
Abstract:
Background: Time synchronization is a very important ability for the acquisition and performance of motor skills; it creates the need to adapt the actions of body segments to external events of the environment that change their position in space. Individuals with Down syndrome (DS) may present deficits when performing tasks with a synchronization demand. We aimed to investigate the performance of individuals with DS in a simple coincident timing task. Method: Thirty-two individuals were divided into two groups: the Down syndrome group (DSG), comprising 16 individuals with an average age of 20 (±5) years, and a control group (CG), comprising 16 individuals of the same age. All individuals performed the simple timing (ST) task, and their performance was measured in milliseconds. The study was conducted in a single phase with the execution of 20 consecutive trials per participant. Results: There was a significant difference in the intergroup analysis for accuracy (absolute error: Z = 3.656, p = 0.001) and for performance consistency (variable error: Z = 2.939, p = 0.003). Conclusion: Individuals with DS have more difficulty integrating the motor action with an external stimulus, and they also present greater inconsistency in performance. Both groups presented the same tendency to delay their motor responses.
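Absolute error and variable error are standard measures in coincident timing studies; a minimal sketch of how they are computed from per-trial errors (illustrative, not the study's code):

    from statistics import mean, stdev

    def timing_scores(errors_ms):
        """errors_ms: per-trial response-minus-target times (ms),
        positive = late. Absolute error (accuracy) is the mean unsigned
        error; variable error (consistency) is the standard deviation
        of the signed errors."""
        absolute_error = mean(abs(e) for e in errors_ms)
        variable_error = stdev(errors_ms)
        return absolute_error, variable_error

    # Example: five trials of the simple timing task
    print(timing_scores([120, -40, 85, 150, -10]))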
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, as it is Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which permits tracing the application of propagation rules and consequently avoiding trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses a history to avoid trivial non-termination. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program and, more specifically, to improve run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. The unfolding technique is one of the basic operations used by most program transformation systems; it consists in replacing a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, maintenance of confluence and termination between the original and transformed programs is shown. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics. Then, Section 2.2 presents the compositionality results.
Afterwards, Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation requires a particular annotated syntax, which is introduced in Section 3.1; its related modified operational semantics is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, Section 3.4 discusses the problems related to the replacement of a rule by its unfolded version, which in turn yields a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related works and directions for future work.
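To make the role of the history concrete, here is a toy sketch in Python (not CHR itself, and not the thesis's semantics): a propagation rule such as transitivity, leq(X,Y), leq(Y,Z) ==> leq(X,Z), keeps its head constraints in the store and would therefore re-fire forever on the same pair; recording each application in a history set is what restores termination.

    def propagate_transitivity(store):
        """Naive fixpoint for the CHR-style propagation rule
        leq(X,Y), leq(Y,Z) ==> leq(X,Z), with constraints encoded as
        pairs (X, Z) meaning leq(X, Z). The history records which pairs
        of constraints the rule has already fired on, avoiding the
        trivial non-termination discussed above."""
        store = set(store)
        history = set()
        changed = True
        while changed:
            changed = False
            for (x, y) in list(store):
                for (y2, z) in list(store):
                    if y == y2 and ((x, y), (y2, z)) not in history:
                        history.add(((x, y), (y2, z)))
                        store.add((x, z))
                        changed = True
        return store

    # leq(a,b), leq(b,c): transitivity adds leq(a,c) exactly once.
    print(propagate_transitivity({("a", "b"), ("b", "c")}))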
Abstract:
Curved mountain belts have always fascinated geologists and geophysicists because of their peculiar structural setting and the geodynamic mechanisms of their formation. The need to study orogenic bends arises from the numerous questions that geologists and geophysicists have tried to answer during the last two decades, such as: What are the mechanisms governing the formation of orogenic bends? Why do they form? Do they develop under particular geological conditions, and if so, what are the most favorable conditions? What are their relationships with the deformational history of the belt? Why is the shape of arcuate orogens in many parts of the Earth so different? What are the factors controlling the shape of orogenic bends? Paleomagnetism has proven to be one of the most effective techniques for documenting the deformation of a curved belt, through the determination of vertical-axis rotations. In fact, the pattern of rotations within a curved belt can reveal the occurrence of bending, and its timing. Nevertheless, paleomagnetic data alone are not sufficient to constrain the tectonic evolution of a curved belt. Usually, structural analysis complements paleomagnetic data in defining the kinematics of a belt, through kinematic indicators on brittle fault planes (i.e., slickensides, mineral fiber growth, S-C structures). My research program has focused on the study of curved mountain belts through paleomagnetism, in order to define their kinematics, timing, and mechanisms of formation. Structural analysis, performed only in some regions, supported and integrated the paleomagnetic data. In particular, three arcuate orogenic systems have been investigated: the Western Alpine Arc (NW Italy), the Bolivian Orocline (Central Andes, NW Argentina), and the Patagonian Orocline (Tierra del Fuego, southern Argentina). The bending of the Western Alpine Arc has so far been investigated using different approaches, though few were based on reliable paleomagnetic data. Results from our paleomagnetic study carried out in the Tertiary Piedmont Basin, located on top of the Alpine nappes, indicate that the Western Alpine Arc is a primary bend that was subsequently tightened by a further ~50° during Aquitanian-Serravallian times (23-12 Ma). This mid-Miocene oroclinal bending, superimposed onto a pre-existing Eocene non-rotational arc, is the result of a composite geodynamic mechanism in which slab rollback, mantle flows, and rotating thrust emplacement are intimately linked. Relying on our paleomagnetic and structural evidence, the Bolivian Orocline can be considered a progressive bend whose formation was driven by the along-strike gradient of crustal shortening. The documented clockwise rotations of up to 45° are compatible with a secondary-bending mechanism occurring after Eocene-Oligocene times (30-40 Ma), and their nature is probably related to the widespread shearing taking place between zones of differential shortening. Since ~15 Ma, the activity of N-S left-lateral strike-slip faults in the Eastern Cordillera, at the border with the Altiplano-Puna plateau, has induced up to ~40° counterclockwise rotations along the fault zone, locally cancelling the regional clockwise rotation. We propose that mid-Miocene strike-slip activity developed in response to compressive stress (related to body forces) at the plateau margins, caused by the progressive lateral (southward) growth of the Altiplano-Puna plateau, spreading laterally from the overthickened crustal region of the salient apex.
The growth of plateaux by lateral spreading seems to be a mechanism common to other major plateaux on Earth (e.g., the Tibetan plateau). Results from the Patagonian Orocline represent the first reliable constraint on the timing of bending at the southern tip of South America. They indicate that the Patagonian Orocline has not undergone any significant rotation since early Eocene times (~50 Ma), implying that it may be considered either a primary bend or an orocline formed during the late Cretaceous-early Eocene deformation phase. This result has important implications for the opening of the Drake Passage at ~32 Ma, since the opening is definitely not related to the formation of the Patagonian Orocline but is solely a consequence of Scotia plate spreading. Finally, relying on the results and implications from the study of the Western Alpine Arc, the Bolivian Orocline, and the Patagonian Orocline, general conclusions on the formation of curved mountain belts have been drawn.
Abstract:
The thesis work presented here investigates the application of learning techniques aimed at a more efficient execution of a portfolio of constraint solvers. A constraint solver is a program that, given a constraint problem as input, computes a solution using a variety of techniques. Constraint problems are highly present in real life: examples such as the organization of train journeys or the scheduling of an airline's crews are all constraint problems. A constraint problem is formalized as a constraint satisfaction problem (CSP). A CSP is described by a set of variables, which can take values belonging to a specific domain, and a set of constraints relating the variables and the values they can assume. One technique for optimizing the resolution of such problems is the one suggested by a portfolio approach. This technique, also used in fields such as economics, combines several solvers, which together can produce better results than a single-solver approach. In this work we build a new technique that combines a portfolio of constraint solvers with machine learning techniques. Machine learning is a field of artificial intelligence that aims to endow machines with a kind of 'intelligence'. An application example is evaluating past cases of a problem and using them to make future choices; this process is also found in human cognition. Specifically, we reason in terms of classification. A classification assigns to a set of input features a discrete output value, such as true or false depending on whether an email is classified as spam or not. The learning phase is carried out using part of CPHydra, a constraint solver portfolio developed at University College Cork (UCC). Of this portfolio algorithm we use only the characteristics employed to describe certain aspects of one CSP with respect to another; these characteristics are also called features. We then create a series of classifiers based on the specific behavior of the solvers. The combination of these classifiers with the portfolio approach is aimed at assessing that CPHydra's features are good and that the classifiers based on those features are reliable. To justify the first result, we carry out a comparison with one of the best state-of-the-art portfolios, SATzilla. Once the quality of the features used for classification is established, we solve the problems by simulating a scheduler. These simulations test different rules built from the previously introduced classifiers. We first act on a single-processor scenario and subsequently expand to a multi-processor scenario. In these experiments we verify that the performance obtained by applying the rules built on the classifiers is better than an execution limited to using the best solver of the portfolio. This thesis work was carried out in collaboration with the 4C research centre at University College Cork. A scientific article based on this work has been prepared and submitted to the International Joint Conference on Artificial Intelligence (IJCAI) 2011.
At the time of submission of this thesis we have not yet been informed of the acceptance of that article. However, the reviewers' responses indicated that the presented method is interesting.
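As an illustration of the per-solver classification idea described above, here is a minimal sketch (hypothetical feature names and data; CPHydra's actual features and training sets differ):

    # One classifier per solver: predict whether the solver will finish
    # within the timeout on an instance described by its features.
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical rows: [n_variables, n_constraints, mean_domain_size]
    X = [[120, 300, 8], [15, 40, 3], [900, 2500, 12], [60, 150, 5]]
    y_solver_a = [1, 1, 0, 1]   # 1 = solver A solved the instance in time

    clf_a = DecisionTreeClassifier().fit(X, y_solver_a)

    # A scheduler rule can then allot CPU time only to the solvers whose
    # classifier predicts success on the incoming instance.
    print(clf_a.predict([[200, 500, 9]]))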
Abstract:
The work presented in this thesis is set in the context of constraint programming, a paradigm for modeling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large class of these problems is naturally formulated in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent such variables. It is thus customary to represent these domains by means of interval-based approximations (hereafter, representations), specified by a lower bound and an upper bound under an appropriate ordering relation. The recent evolution of research on constraint programming over sets has clearly indicated that combining different representations achieves performance orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency is maintained between the different representations and in how constraints are propagated in order to reduce the search space. Unfortunately, no formal tool exists to compare these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of a combination of representations, bringing out the common aspects that have characterized previous works. In particular, we identify two possible kinds of combination, a strong one and a weak one, defining the notions of bound consistency on constraints and synchronization between representations. Our study offers some interesting insights into the existing combinations, highlighting their limits and revealing some surprises. We also provide a complexity analysis of the synchronization between minlex, a representation able to optimally propagate lexicographic constraints, and the main existing representations.
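A minimal sketch of the interval idea for set domains (illustrative only; the representations compared in the thesis, such as minlex, are more refined): a set variable is approximated by a lower bound of mandatory elements and an upper bound of possible elements, and propagation shrinks the gap between the two.

    class SetInterval:
        """Subset-bounds approximation of a set variable's domain:
        every admissible value S satisfies lb <= S <= ub in the
        subset order."""
        def __init__(self, lb, ub):
            self.lb, self.ub = set(lb), set(ub)
            assert self.lb <= self.ub, "inconsistent bounds"

        def require(self, e):
            # propagation: e must belong to the set
            assert e in self.ub, "inconsistency: e was already excluded"
            self.lb.add(e)

        def exclude(self, e):
            # propagation: e cannot belong to the set
            assert e not in self.lb, "inconsistency: e was already required"
            self.ub.discard(e)

    x = SetInterval(lb={1}, ub={1, 2, 3, 4})
    x.require(3)
    x.exclude(4)
    print(x.lb, x.ub)   # {1, 3} {1, 2, 3}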
Abstract:
In light of what happened in 2010 and 2011, many European countries found themselves in a difficult position, with credit rating agencies downgrading their sovereign debt. Problems of solvency and guarantees on state bonds were perceived as too risky for a monetary union such as Europe. Fear of contagion from Greece also threatened other countries such as Italy, Spain, Portugal and Ireland, while Germany and France asked for a division between risky and riskless bonds in order to feel safer. Our paper draws inspiration from Roch and Uhlig (2011), refers to the Argentine case examined by Arellano (2008), and examines possible interventions such as monetization or bailout, as proposed by Cole and Kehoe (2000). We propose a model in which a state defaults and cannot repay a fraction of the old bonds; but contrary to Roch and Uhlig, who considered a one-time cost of default, we treat default as an accumulation of losses, perceived as unpaid fractions of the old debts. Our contribution to the literature is that default immediately implies that the economy faces a bad period and that, by accumulating losses, the government will be worse off. We study a function for this accumulation of debt period by period, in order to get an idea of the magnitude of the waste of resources that the economy will face when it experiences a default. Our thesis is that bailouts just postpone the day of reckoning (Roch, Uhlig); so it is better to default before accumulating a lot of debt. What Europe needs now is the introduction of new reforms in a controlled default, in which the Eurozone is preserved in its integrity and a state can fail with the promise of a future resurrection. As experience shows, governments are not interested in reducing debt as long as there are ECB interventions. This clearly creates a distortion between countries in the same monetary union, giving states just an illusion about their future debtor position.
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic that has aroused the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows one to easily model any type of problem and solve it within a problem-independent framework, differently from local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
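The heart of local branching, ported from MIP to CP as described above, is a distance constraint around the incumbent solution; a minimal sketch on 0-1 variables (illustrative, not the thesis's framework):

    def in_local_branching_neighborhood(x, incumbent, k):
        """Local branching constraint on binary solutions: the Hamming
        distance from the incumbent must be at most k, i.e.
        sum_i |x_i - xbar_i| <= k. Tree search restricted by this
        constraint explores the neighborhood (intensification); negating
        it forces the search elsewhere (diversification)."""
        return sum(abs(a - b) for a, b in zip(x, incumbent)) <= k

    incumbent = [1, 0, 1, 1, 0]
    print(in_local_branching_neighborhood([1, 1, 1, 0, 0], incumbent, k=2))  # True
    print(in_local_branching_neighborhood([0, 1, 0, 0, 1], incumbent, k=2))  # False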
Abstract:
Timing of waiting-list entrance for patients with cystic fibrosis in need of pulmonary transplant: the experience of a regional referral centre. Objective: Evaluation of parameters that can predict a rapid decay of the general condition of patients affected by cystic fibrosis (CF) who do not meet specific criteria for candidacy for pulmonary transplant. Material and methods: Fifteen patients with CF who died of complications and 8 who underwent lung transplantation in the 2000-2010 decade were enrolled. Clinical data from 2 years before the event (body mass index, FEV1%, number of intravenous antibiotic treatments per year, colonization with methicillin-resistant Staphylococcus aureus (MRSA), mucoid Pseudomonas aeruginosa, or Burkholderia cepacia, and allergic pulmonary aspergillosis) were compared between the 2 groups. Results: Mean FEV1% was significantly higher and the mean number of antibiotic treatments was lower in deceased than in transplanted patients (p<0.002 and p<0.001, respectively). Although the patients who died met no inclusion criteria for entering the transplant list 2 years before death, suggestive findings such as low BMI (17.3), a high incidence of hepatic pathology (33.3%), diabetes (50%), and infection with MRSA (25%), Pseudomonas aeruginosa (83.3%) and Burkholderia cepacia (8.3%) were found, with no statistical difference from transplanted patients, suggesting that those patients were at risk of a severe prognosis. Among the patients who died, there were twice as many females as males. Conclusion: When evaluating patients with CF, negative prognostic factors such as the ones investigated in this study should be considered in order to select individuals with a high mortality risk who need a stricter therapeutic approach and follow-up. Inclusion of those patients in the transplant waiting list should be taken into account.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks facing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDFGs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on practical-size problems.
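One common shape for a modular precedence constraint in cyclic scheduling, stated here as a generic formulation under the abstract's terminology (the thesis's own constraint and filtering algorithms may differ in detail), is

    s_j \ge s_i + d_i - k_{ij}\,\lambda, \qquad k_{ij} \in \mathbb{Z}_{\ge 0},

where s_i and d_i are the start time and duration of activity i, \lambda is the period (here a decision variable inferred from the scheduling decisions, not a fixed parameter), and k_{ij} is the number of iterations separating the occurrence of j from the occurrence of i it depends on; resource feasibility must then hold for the schedule folded modulo \lambda, which is the role of a cyclic global cumulative constraint.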