948 results for Constraints-Led Approach
Abstract:
Background: The number of centenarians is rapidly increasing in Europe. In Portugal, it has almost tripled over the last 10 years, and centenarians constitute one of the fastest-growing segments of the population. This paper aims to describe the health and sociodemographic characteristics of Portuguese centenarians as recorded in the 2011 census and to identify sex differences. Methods: All persons aged 100 years living in mainland Portugal and the Madeira and Azores islands at the time of the 2011 census (N = 1,526) were considered. Measures include sociodemographic characteristics and perceived difficulties in six functional domains of basic actions (seeing, hearing, walking, cognition, self-care, and communication) as assessed by the official Portuguese census questionnaires. Results: Most centenarians are women (82.1%), widowed (82%), never attended school (51%), and live in private households (71%). The majority show major constraints in seeing (67.4%), hearing (72.3%), and particularly in mobility (83.7% cannot, or have great difficulty, walking or climbing stairs, and 80.7% bathing or dressing). In general, better outcomes were found for reported memory/concentration and understanding, with 39.1% and 42.5% presenting no or mild difficulty, respectively. Top-level functioning (no/mild difficulties in all dimensions concurrently) was observed in a minority of cases (5.96%). Women outnumber men by a ratio of 4.6 to 1, and statistically significant differences were found between men and women for all health-related variables, with women presenting a higher percentage of difficulties. Conclusion: Portuguese centenarians experience great difficulties in the sensory domains and in basic activities of daily living, and to a lesser extent in cognition and communication. The obtained profile, though self-reported, is important for considering the potential for social and family participation of this population regardless of their functional and sensory limitations.
Based on the observed differences between men and women, gender-specific and gender-sensitive interventions are recommended in order to address women's worse overall condition.
Abstract:
The literature clearly links the quality and capacity of a country's infrastructure to its economic growth and competitiveness. This thesis analyses the historic national and spatial distribution of investment by the Irish state in its physical networks (water, wastewater and roads) across the 34 local authorities and examines how Ireland is perceived internationally relative to its economic counterparts. An appraisal of the current status and shortcomings of Ireland's infrastructure is undertaken through interviews with key stakeholders from foreign direct investment companies and national policymakers, to identify Ireland's infrastructural gaps along with current challenges in how the country delivers infrastructure. These interviews identified many issues with how infrastructure decision-making is currently undertaken. This led to an evaluation of how other countries inform decision-making, and thus this thesis presents a framework for how and why Ireland should embrace a Systems of Systems (SoS) methodology for infrastructure decision-making going forward. In undertaking this study a number of other infrastructure challenges were identified: significant political interference in infrastructure decision-making and delivery; the need for a national agency to remove the existing 'silo' mentality in infrastructure delivery; and the ways in which tax incentives can interfere with the market, and their significance. The two key infrastructure gaps identified during the interview process were: the need for government intervention in the rollout of sufficient communication capacity, at a competitive cost, outside of Dublin; and the urgent need to address water quality and capacity, with approximately 25% of the population currently being served by water of unacceptable quality. Despite considerable investment in its national infrastructure, Ireland's infrastructure performance continues to trail behind its economic partners in the Eurozone and OECD.
Ireland is projected to have the highest growth rate in the Eurozone in 2015 and 2016, albeit that it required a bailout in 2010, and, at the time of writing, it is beginning to invest in its infrastructure networks again. This thesis proposes the development and implementation of a SoS approach to infrastructure decision-making based on: existing spatial and capacity data for each of the constituent infrastructure networks; and scenario computation and analysis of alternative drivers, e.g. demographic change, economic variability and demand/capacity constraints. The output of such an analysis would provide valuable evidence on which policymakers and decision makers alike could rely, something that has been lacking in historic investment decisions.
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises that simplifies the data analysis by removing the need to construct the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix; from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
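The eigendecomposition idea described above can be illustrated with a toy model (not LISA's actual data pipeline): several data streams share one large common noise, assumed white here for simplicity. The eigenvectors associated with the small eigenvalues span the subspace in which the common noise cancels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_streams, n_samples = 4, 10000

# One large common "laser" noise plus small independent "photodetector" noises.
laser = rng.normal(0.0, 100.0, n_samples)
data = np.stack([laser + rng.normal(0.0, 1.0, n_samples) for _ in range(n_streams)])

cov = np.cov(data)                  # noise covariance matrix of the raw streams
evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# The largest eigenvalue carries the common noise; the rest form a "quiet" set.
quiet = evecs[:, :-1]
cleaned = quiet.T @ data            # combinations in which the common noise cancels

print(evals)                        # one dominant eigenvalue, the others ~1
print(cleaned.std(axis=1))          # ~1: the large common noise is removed
```

The split into two eigenvalue sets, and the fact that projecting onto the small-eigenvalue eigenvectors yields laser-noise-free combinations, is exactly the structure the principal component approach exploits.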
Abstract:
In the Iberian Variscides several first-order arcuate structures have been considered. In spite of being highly studied, their characterization, formation mechanisms and even existence are still debated. The main Ibero-Armorican Arc (IAA) is essentially defined by a predominant NW–SE trend in the Iberian branch and an E–W trend in the Brittany one. However, in northern Spain it presents a 180° rotation, sometimes known as the Cantabrian Arc (CA). The relation between the two arcs is controversial: they are considered either as a single arc due to one tectonic event, or as the result of a polyphasic process. According to the latter assumption, there is a later arcuate structure (CA) overlapping a previous major one (IAA). Whatever the model, it must be able to explain the presence of a Variscan sinistral transpression in Iberia and a dextral one in Armorica, and a deformation spanning from the Devonian to the Upper Carboniferous. Another arcuate structure in continuity with the CA, the Central-Iberian Arc (CIA), was recently proposed, mainly based upon magnetic anomalies, the geometry of major folds and Ordovician paleocurrents. A critical review of the structural, stratigraphic and geophysical data supports both the IAA and the CA, but as independent structures. However, the presence of a CIA is highly questionable and could not be supported. The complex strain pattern of the IAA and the CA can be explained by a Devonian–Carboniferous polyphasic indentation of a Gondwana promontory. In this model the CA is essentially a thin-skinned arc, while the IAA has a more complex and longer evolution that has led to a thick-skinned first-order structure. Nevertheless, both arcs are essentially the result of a lithospheric bending process during the Iberian Variscides.
Abstract:
In this work we analyze an optimal control problem for a system of two hydroelectric power stations in cascade with reversible turbines. The objective is to optimize the profit of power production while respecting the system's restrictions. Some of these restrictions translate into state constraints, and the cost function is nonconvex, which increases the complexity of the optimal control problem. The problem is solved numerically, and two different approaches are adopted, based respectively on global optimization techniques (the Chen-Burer algorithm) and on a projection estimation refinement method (the PER method). The PER method is used as a technique to reduce the dimension of the problem. The results and execution times of the two procedures are compared.
Abstract:
Neuroblastoma (NB) is the deadliest cancer in early childhood. Around 25% of patients present MYCN-amplification (MNA), which is linked to poor prognosis, metastasis, and therapy resistance. While retinoic acid (RA) is beneficial for only some NB patients, the cause of resistance to it is still unknown. Thus, there remains a need for new therapies to treat NB. I show that MYCN-specific inhibition by the antigene oligonucleotide BGA002 in combination with 13-cis RA (BGA002-RA) overcomes resistance in MNA-NB cell lines, leading to a potent decrease in MYCN mRNA expression and protein levels. Moreover, BGA002-RA reactivated neuronal differentiation or led to apoptosis in MNA-NB cell lines, and inhibited invasive capacity. Since NB and the PI3K/mTOR pathway are strictly related, MYCN down-regulation by BGA002 led to mTOR pathway inhibition in MNA-NB, which was strengthened by BGA002-RA. I further analyzed whether MYCN silencing may induce autophagy reactivation, and indeed BGA002-RA caused a massive increase in lysosomes and macrovacuoles in MNA-NB cells. In addition, while MYCN is known to induce angiogenesis, in vivo treatment with BGA002-RA eliminated tumor vascularization in an MNA-NB mouse model and significantly increased survival. Overall, these results indicate that MYCN modulation mediates the therapeutic efficacy of RA and the development of RA resistance in MNA-NB. Furthermore, by targeting MYCN, we show a cancer-specific way of inhibiting the mTOR pathway only in MNA-NB, avoiding the side effects of targeting mTOR in normal cells. These findings warrant clinical testing of BGA002-RA as a potential strategy to overcome RA resistance in MNA-NB.
Abstract:
Barriers to technology adoption in teaching and learning are well documented, with a corresponding body of research focused on how these can be addressed. As a way to combine a variety of these adoption strategies, the University of Sheffield developed a Technology Enhanced Learning Festival, TELFest. This annual, week-long event emphasises the role technology can play in an engaging learning experience, combining expert-led practical workshops, sharing of practice, discussions and presentations by practitioners. As the popularity of the event has grown and the range of topics expanded, a community of practice has organically coalesced among attendees, supporting the mainstream adoption of several technologies and helping to broaden educational innovation beyond isolated pockets. This paper situates TELFest within the technology adoption literature by providing details about TELFest, outlining the results of an investigation into the impact it has had on attendees' teaching practice, and summarising some of the limitations of the method along with reflections on how to address these limitations in the future.
Abstract:
From its domestication to the present day, the horse has assumed multiple roles in human society. Over time, this condition and the lack of specific regulation have led to the development of different kinds of management systems for this species. This Ph.D. research project aims to investigate horses' welfare under different management practices and housing systems through a multidisciplinary approach, taking into account biological function, naturalness, and the affective dimension. The results are presented in five articles that identify risk factors that can undermine horse welfare, and examine tools and parameters that can be employed for its assessment. Our research shows the importance of considering the evolutionary history and the species-specific and behavioural needs of horses in their management and housing. Sociality, free movement, diet composition and foraging routine, and the workload that these animals undergo are important factors that should be taken into account. Furthermore, this research has shown the importance of employing different parameters (e.g., behaviour, endocrinological parameters, and immune activity) in welfare assessment, and proposes the use of horsehair DHEA (dehydroepiandrosterone) as a possible additional non-invasive measure for the investigation of long-term stress conditions. Finally, our results underline the importance of considering the affective dimension in welfare research. Recently, Judgement Bias Tests (JBT), which are based on the influence of affective states on the decision-making process, have been widely employed in animal welfare research. However, our studies show that the use of spatial JBT in horses can have some limitations. Even today, several management systems do not fulfil the species-specific needs of horses; the implementation of specific regulations could therefore ameliorate horse welfare.
A multidisciplinary approach to welfare assessment is fundamental, but the individual and its own characteristics should always be kept in mind, as they can influence not only physiological, immunological, and behavioural responses but also the emotional and cognitive dimensions.
Abstract:
Synthetic chemists constantly strive to develop new methodologies to access complex molecules more sustainably. The recently developed photocatalytic approach is a valid and greener alternative to classical synthetic methods. Here we present three protocols to furnish five-membered rings exploiting photoredox catalysis. We first obtained 4,5-dihydrofurans (4,5-DHFs) from readily available olefins and α-haloketones, employing fac-Ir(ppy)3 as a photocatalyst under blue-light irradiation (Figure 1, top). This transformation proved very broad in scope, thanks to its mild conditions and the avoidance of stoichiometric amounts of oxidants or reductants. Moreover, similar conditions could lead to β,γ-unsaturated ketones or to highly substituted tetrahydrofurans (THFs) by carefully differentiating the substitution pattern on the starting materials and properly adjusting the reaction parameters. We then turned our attention to the reactivity of allenamides, employing analogous photocatalytic conditions to access 2-aminofurans (Figure 1, bottom). α-Haloketones again provided the radical, generated by fac-Ir(ppy)3 under visible-light irradiation, which added to the π-system and furnished the cyclic molecule. The addition of a second molecule of the α-haloketone led to the formation of the final highly functionalized furan, which might be further elaborated to afford more complex products. Both works were accompanied by mechanistic investigations supported by experimental and computational methods. As our last project, we developed a methodology to achieve cyclopentanone-fused N-methylpyrrolidines (Figure 2), exploiting N,N-dimethylamines and carboxylic acids as radical sources. In two separate photocatalytic steps, both functionalities are manipulated through photoredox catalysis by 4CzIPN to add to an α,β-enone system, furnishing the bicyclic product.
Abstract:
The pressing global environmental issues related to plastic materials can be addressed by following two different approaches: i) the development of synthetic strategies towards novel bio-based polymers, derived from biomasses and thus identifiable as CO2-neutral materials, and ii) the development of new plastic materials, such as biocomposites, which are bio-based and biodegradable and therefore able to counteract the accumulation of plastic waste. In this framework, this dissertation presents extensive research efforts devoted to the synthesis and characterization of polyesters based on various bio-based monomers, including ω-pentadecalactone, vanillic acid, 2,5-furandicarboxylic acid, and 5-hydroxymethylfurfural. With the aim of achieving high-molecular-weight polyesters, different synthetic strategies have been used, such as melt polycondensation, enzymatic polymerization, ring-opening polymerization and chain extension reactions. In particular, poly(ethylene vanillate) (PEV), poly(ω-pentadecalactone) (PPDL), poly(ethylene vanillate-co-pentadecalactone) (P(EV-co-PDL)), poly(2-hydroxymethyl 5-furancarboxylate) (PHMF), poly(ethylene 2,5-furandicarboxylate) (PEF) with different amounts of diethylene glycol (DEG) units, poly(propylene 2,5-furandicarboxylate) (PPF) and poly(hexamethylene 2,5-furandicarboxylate) (PHF) have been prepared and extensively characterized. To address the shortcomings of poly(hydroxybutyrate-co-valerate) (PHBV), minimal formulations with natural additives and blends with medium-chain-length PHAs (mcl-PHAs) have been tested. Additionally, this dissertation presents new biocomposites based on polylactic acid (PLA), poly(butylene succinate) (PBS), and PHBV, polymers that are both bio-based and biodegradable. To maintain their biodegradability, only bio-fillers have been taken into account as reinforcing agents.
Moreover, the commitment to sustainability further limited the selection and led to the exclusive use of agricultural waste as fillers. Specifically, biocomposites have been obtained and discussed using the following material pairings: PLA with agro-wastes such as tree prunings, potato peels, and hay leftovers; PBS with exhausted non-compliant green coffee beans; and PHBV with industrial starch extraction residues.
Abstract:
The current climate crisis requires a comprehensive understanding of biodiversity to acknowledge how ecosystems’ responses to anthropogenic disturbances may result in feedback that can either mitigate or exacerbate global warming. Although ecosystems are dynamic and macroecological patterns change drastically in response to disturbance, dynamic macroecology has received insufficient attention and theoretical formalisation. In this context, the maximum entropy principle (MaxEnt) could provide an effective inference procedure to study ecosystems. Since the improper usage of entropy outside its scope often leads to misconceptions, the opening chapter will clarify its meaning by following its evolution from classical thermodynamics to information theory. The second chapter introduces the study of ecosystems from a physicist’s viewpoint. In particular, the MaxEnt Theory of Ecology (METE) will be the cornerstone of the discussion. METE predicts the shapes of macroecological metrics in relatively static ecosystems using constraints imposed by static state variables. However, in disturbed ecosystems with macroscale state variables that change rapidly over time, its predictions tend to fail. In the final chapter, DynaMETE is therefore presented as an extension of METE from static to dynamic. By predicting how macroecological patterns are likely to change in response to perturbations, DynaMETE can contribute to a better understanding of disturbed ecosystems’ fate and the improvement of conservation and management of carbon sinks, like forests. Targeted strategies in ecosystem management are now indispensable to enhance the interdependence of human well-being and the health of ecosystems, thus avoiding climate change tipping points.
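As a concrete, deliberately minimal illustration of the MaxEnt inference procedure (a textbook example, not code from METE), consider the classic Brandeis dice problem: find the least-biased distribution over the faces of a die compatible with a prescribed mean. The constrained entropy maximum has the Gibbs form p_i ∝ exp(λi), with the Lagrange multiplier λ fixed by the constraint; the sketch below solves for λ by bisection.

```python
import numpy as np

def maxent_die(target_mean):
    """Maximum-entropy distribution over die faces 1..6 with a fixed mean.

    The solution has Gibbs form p_i ∝ exp(lam * i); the multiplier lam is
    found by bisection on the mean it implies (which increases with lam).
    """
    faces = np.arange(1, 7)

    def implied_mean(lam):
        w = np.exp(lam * faces)
        return (w / w.sum()) @ faces

    lo, hi = -5.0, 5.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if implied_mean(mid) < target_mean else (lo, mid)
    w = np.exp(lo * faces)
    return w / w.sum()

p = maxent_die(4.5)
print(p)                     # probabilities tilt toward the high faces
print(p @ np.arange(1, 7))   # ≈ 4.5, the imposed constraint
```

METE applies the same inference logic with ecological state variables (area, species richness, abundance, metabolic rate) playing the role of the constraints.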
Abstract:
Vision systems are powerful tools that play an increasingly important role in modern industry, detecting errors and maintaining product standards. With the increased availability of affordable industrial cameras, computer vision algorithms have been applied more and more to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed and on the trials conducted to obtain the final performance. Transfer learning, which alleviates the requirement for a large amount of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, devoted respectively to vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
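Synthetic-image augmentation of the kind mentioned above can be sketched in a few lines. This is a generic illustration with random flips, brightness jitter and additive noise; the specific transforms and parameters are assumptions, since the thesis's actual augmentation pipeline is not detailed here.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, rng):
    """Generate a synthetic variant of a grayscale image in [0, 1]:
    random horizontal flip, brightness jitter, and sensor-like noise."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                        # horizontal flip
    out = out * rng.uniform(0.8, 1.2)             # brightness jitter
    out = out + rng.normal(0.0, 0.02, out.shape)  # additive Gaussian noise
    return np.clip(out, 0.0, 1.0)

image = rng.random((64, 64))                      # stand-in for one acquired frame
batch = np.stack([augment(image, rng) for _ in range(8)])
print(batch.shape)                                # (8, 64, 64): 8 variants from 1 image
```

Each acquired image thus yields many plausible training samples, which is what makes a few hundred annotated images sufficient when combined with transfer learning.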
Abstract:
The 1D extended Hubbard model with soft-shoulder potential has proved very difficult to study, due to its non-solvability and to the competition between terms of the Hamiltonian. Given this, we investigated its phase diagram for filling n=2/5 and soft-shoulder potential range r=2 using Machine Learning techniques. This led to a rich phase diagram; calling U and V the parameters associated with the Hubbard potential and the soft-shoulder potential respectively, we found that for V<5 and U>3 the system is always in a Tomonaga–Luttinger Liquid phase, then becomes a Cluster Luttinger Liquid for 5
Abstract:
In recent times, a significant research effort has focused on how deformable linear objects (DLOs) can be manipulated for real-world applications such as the assembly of wiring harnesses for the automotive and aerospace sectors. This remains an open topic because of the difficulty of accurately modelling the behaviour of these objects and of simulating tasks involving their manipulation across a variety of different scenarios. These problems have led to the development of data-driven techniques, in which machine learning is exploited to obtain reliable solutions. However, this approach makes the solution difficult to extend, since the learning must be replicated almost from scratch as the scenario changes. It follows that some model-based methodology must be introduced to generalize the results and reduce the training effort accordingly. The objective of this thesis is to develop a solution for DLO manipulation to assemble a wiring harness for the automotive sector, based on the adaptation of a base trajectory set by means of reinforcement learning methods. The idea is to create trajectory planning software capable of solving the proposed task, reducing the learning time (which is spent in real time) where possible, while still delivering suitable performance and reliability. The solution has been implemented on a collaborative 7-DOF Panda robot at the Laboratory of Automation and Robotics of the University of Bologna. Experimental results are reported, showing that the robot is capable of optimizing the manipulation of the DLOs, gaining experience as the task is repeated, while showing a high success rate from the very beginning of the learning phase.
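The trajectory-adaptation idea can be caricatured with a tiny numerical sketch: a base trajectory is perturbed, and perturbations that improve a scalar task reward are kept. This stand-in uses simple random-search hill climbing rather than the thesis's actual reinforcement learning method, and the 1-D trajectory and reward function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

base = np.linspace(0.0, 1.0, 10)              # base trajectory: 10 waypoints, 1-D
target = base + 0.3 * np.sin(np.pi * base)    # unknown "good" trajectory the task rewards
offsets = np.zeros_like(base)                 # learned corrections to the base trajectory

def reward(traj):
    # Task feedback, higher is better; in practice this would come from execution.
    return -np.sum((traj - target) ** 2)

best = reward(base + offsets)
for episode in range(2000):
    trial = offsets + rng.normal(0.0, 0.05, offsets.shape)  # explore around current policy
    r = reward(base + trial)
    if r > best:                              # keep corrections that improved the outcome
        offsets, best = trial, r

print(best)   # much closer to 0 than reward(base): the adapted trajectory nears the target
```

The point of the sketch is the loop structure (perturb, execute, keep what improves), which is the part that carries over to learning corrections online during task repetition.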
Abstract:
Combinatorial decision and optimization problems arise in numerous applications, such as logistics and scheduling, and can be solved with various approaches. Boolean Satisfiability and Constraint Programming solvers are among the most used, and their performance is significantly influenced by the model chosen to represent a given problem. This has led to the study of model reformulation methods, one of which is tabulation, which consists in rewriting the expression of a constraint in terms of a table constraint. To apply it, one should identify which constraints can help and which can hinder the solving process. So far this has been performed by hand, for example in MiniZinc, or automatically with manually designed heuristics, as in Savile Row. However, it has been shown that the performance of these heuristics differs across problems and solvers, in some cases helping and in others hindering the solving procedure. Meanwhile, recent works in the field of combinatorial optimization have shown that Machine Learning (ML) can be increasingly useful in model reformulation steps. This thesis aims to design an ML approach to identify the instances for which Savile Row's heuristics should be activated. Additionally, since the heuristics may miss some good tabulation opportunities, we perform an exploratory analysis towards an ML classifier able to predict whether or not a constraint should be tabulated. The results for the first goal show that a random forest classifier leads to an increase in the performance of four different solvers. The experimental results for the second task show that an ML approach could improve the performance of a solver for some problem classes.
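To make the selection idea concrete, here is a self-contained toy: a miniature bagged ensemble of decision stumps standing in for the random forest that decides whether activating the heuristic is likely to pay off on a given instance. The features, labels, and model size are all invented for illustration; this is not the thesis's actual feature set or classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance features, e.g. [n_constraints, avg_arity, domain_size] rescaled to [0, 1],
# with a synthetic label: 1 = activating the tabulation heuristic paid off.
X = rng.uniform(0.0, 1.0, (200, 3))
y = (0.7 * X[:, 0] + 0.3 * X[:, 2] > 0.5).astype(int)

# Bagged decision stumps: bootstrap sample + random feature, as in a random forest.
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))            # bootstrap resample
    f = rng.integers(0, X.shape[1])                  # random feature choice
    t = np.median(X[idx, f])                         # split threshold
    hi = int(y[idx][X[idx, f] > t].mean() > 0.5)     # majority class above the split
    lo = int(y[idx][X[idx, f] <= t].mean() > 0.5)    # majority class below the split
    stumps.append((f, t, hi, lo))

def predict(x):
    votes = [hi if x[f] > t else lo for f, t, hi, lo in stumps]
    return int(np.mean(votes) > 0.5)                 # majority vote of the ensemble

acc = np.mean([predict(x) == label for x, label in zip(X, y)])
print(acc)   # well above chance on this synthetic training set
```

A production version would use a full random forest library and features extracted from real instances, but the bagging-plus-voting structure is the same.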