Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to study the physical implementation of NoCs to evaluate their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, allowing the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with a high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information toward the data retrieval point); the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them suggests a solution to the related problems in light of this trade-off. The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning that have to transmit data toward a coordinator node, named sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently toward a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered for example by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity, without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message among the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires by using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter that allows a local binary decision on the absence/presence of a target (fire).
The second function is that each node receives the decision made by the previous node of the chain, compares it with the one derived from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observation and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experimentation.
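As an illustration of the serial fusion idea just described, the following minimal sketch (with entirely hypothetical sensor and error-rate parameters, not the fusion rules actually developed in the thesis) lets each node combine the log-likelihood ratio of its own temperature reading with the evidence carried by the previous node's binary decision, then threshold the sum:

```python
import math

# Hypothetical sensor model: temperature ~ N(MU0, SIGMA) with no fire,
# N(MU1, SIGMA) with fire; P_FA/P_MD are assumed per-node error rates.
MU0, MU1, SIGMA = 20.0, 35.0, 5.0
P_FA, P_MD = 0.05, 0.05

def local_llr(temp):
    """Log-likelihood ratio of a single temperature reading."""
    return ((temp - MU0) ** 2 - (temp - MU1) ** 2) / (2 * SIGMA ** 2)

def decision_llr(prev_decision):
    """Evidence carried by the previous node's binary decision."""
    if prev_decision == 1:
        return math.log((1 - P_MD) / P_FA)   # P(d=1|fire) / P(d=1|no fire)
    return math.log(P_MD / (1 - P_FA))       # P(d=0|fire) / P(d=0|no fire)

def fuse(temp, prev_decision, threshold=0.0):
    """Serial fusion: combine the local observation with the incoming decision."""
    return int(local_llr(temp) + decision_llr(prev_decision) > threshold)

# Decisions propagate along the chain toward the sink.
readings = [21.0, 24.0, 33.0, 36.0]
d = 0
for t in readings:
    d = fuse(t, d)
print("decision delivered to the sink:", d)
```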
Abstract:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo Simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability. The computational efficiency and robustness are very good too, even when dealing with strongly non-Gaussian distributions. What is more, the resulting samples possess all the relevant, well-defined and desired properties of “translation fields”, including crossing rates and distributions of extremes. The topic of the second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars in existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage affecting only a small portion of a bar, Genetic Algorithms have proved to be an effective tool to solve the problem.
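The "translation field" construction mentioned above admits a compact illustration: an underlying Gaussian field g is mapped through F⁻¹∘Φ, so the result has the target marginal F by construction. A minimal 1D sketch follows (the thesis's multi-dimensional, spectrum-matching algorithm is considerably more elaborate; the spectral shape and the lognormal target below are placeholder assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, dx = 1024, 0.1

# 1) Sample a zero-mean, unit-variance stationary Gaussian field via the
#    spectral (FFT) method, with a placeholder rational spectrum.
freqs = np.fft.rfftfreq(n, d=dx)
psd = 1.0 / (1.0 + (2 * np.pi * freqs) ** 2)     # assumed spectrum shape
phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
g = np.fft.irfft(np.sqrt(psd) * np.exp(1j * phases), n=n)
g = (g - g.mean()) / g.std()

# 2) Translation: map through Phi, then through the inverse target CDF.
#    Note: in practice the underlying Gaussian spectrum must be corrected
#    so that the translated field matches the TARGET spectrum too.
u = stats.norm.cdf(g)               # Phi(g), uniform marginals
y = stats.lognorm.ppf(u, s=0.8)     # F^{-1}(Phi(g)): strongly non-Gaussian

print("sample skewness:", stats.skew(y))  # clearly non-zero, as expected
```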
Abstract:
CHAPTER 1: FLUID-VISCOUS DAMPERS
In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior and the latest evolution of the technology with which they are equipped. In the second section we report the definitions and the guidelines for the design of these devices included in some international codes. In the third section the results of some experimental tests carried out by various authors on the response of these devices to external forces are discussed. For this purpose we report some technical schedules that are usually supplied with the devices now available on the international market. In the same section we also show some analytic models proposed by various authors, which are able to describe efficiently the physical behavior of fluid-viscous dampers. In the last section we present some cases of application of these devices to existing structures and to new constructions. We also show some cases in which these devices have proved useful for purposes other than the reduction of seismic actions on structures.
CHAPTER 2: DESIGN METHODS PROPOSED IN LITERATURE
In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of sdf systems to a harmonic external force is studied; in the last part the response to a random external force is discussed. In the first section the equations of motion of an elastic-linear sdf system equipped with a non-linear fluid-viscous damper undergoing a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form. Therefore some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between the non-linear device and an equivalent linear one. Operating in this way it is possible to define an equivalent damping ratio and the problem becomes linear; the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices and the second on the equivalence of power consumption. After that we compare these two techniques by studying the response of an sdf system undergoing a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in an implicit form, by dividing, as usual, by the mass of the system. In this way we get a reduction of the number of variables, by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response depends linearly on the amplitude of the external force, and it depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. All these considerations are extended, in the last section, to the case of a random external force.
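For reference, the energy-based linearization recalled above is commonly written as follows (this is the standard form found in the linearization literature, e.g. the Ramirez and Constantinou line of work cited later, not an excerpt from the thesis). For a damper force F_D = C |v|^α sgn(v) under harmonic motion u = u_0 sin(ωt):

```latex
% Energy dissipated per cycle by the non-linear damper:
E_D^{NL} = \lambda \, C \, \omega^{\alpha} \, u_0^{1+\alpha},
\qquad
\lambda = 2^{2+\alpha} \, \frac{\Gamma^2\!\left(1 + \alpha/2\right)}{\Gamma\!\left(2 + \alpha\right)}
% Equating with the linear damper's dissipation E_D^{L} = \pi \, C_{eq} \, \omega \, u_0^{2}:
C_{eq} = \frac{\lambda}{\pi} \, C \, (\omega u_0)^{\alpha - 1},
\qquad
\xi_{eq} = \frac{C_{eq}}{2 \, m \, \omega_n}
```

Because C_eq (and hence ξ_eq) contains the response amplitude u_0, iteration is needed, which is precisely the difficulty the parameter ε introduced in Chapter 3 is meant to remove.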
CHAPTER 3: DESIGN METHOD PROPOSED
In this chapter the theoretical basis of the proposed design method is introduced. The need for a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some parameters included in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter has been obtained from the definition of the equivalent damping ratio. The implicit form of the equation of motion is written by introducing the parameter ε instead of the equivalent damping ratio. This new implicit equation of motion has no terms affected by the response, so that once ε is known the response can be evaluated directly. In the second section it is discussed how the parameter ε affects some characteristics of the response: drift, velocity and base shear. All the results described up to this point have been obtained by retaining the non-linearity of the behavior of the dampers. In order to obtain a linear formulation of the problem, which can be solved using the well-known methods of structural dynamics, as done before for the iterative methods by introducing the equivalent damping ratio, it is shown how the equivalent damping ratio can be evaluated from the value of ε. Operating in this way, once the parameter ε is known, it is quite easy to estimate the equivalent damping ratio and to proceed with a classic linear analysis. In the last section it is shown how the parameter ε can be taken as a reference for evaluating the convenience of using non-linear dampers instead of linear ones, on the basis of the type of external force and the characteristics of the system.
CHAPTER 4: MULTI-DEGREE OF FREEDOM SYSTEMS
In this chapter the design methods for an elastic-linear mdf system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, for sdf systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming elastic-linear behavior of the structure. We mention that some adjusting coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the aims of this thesis and we do not discuss them further. The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it must necessarily include an iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000. It is reported in the first section and is referred to as the “Iterative Method”. Following the guidelines for sdf systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of mdf systems is introduced. Operating in this way, the evaluation of the equivalent damping ratio (ξsd) can be done directly, without implementing iterative processes. This procedure is referred to as the “Direct Method” and is reported in the second section.
In the third section the two methods are compared by studying 4 cases of two moment-resisting steel frames subjected to real accelerograms: the response of the system calculated with the two methods is compared with the numerical response obtained from the software SAP2000-NL (a CSI product). In the last section a procedure is introduced to create spectra of the equivalent damping ratio, as a function of the parameter ε and the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.
Abstract:
Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an augmented dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, which boost the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
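The refinement criterion at the heart of a wavelet-based adaptive method can be sketched compactly: a fast wavelet transform of the current solution localizes sharp variations, and cells whose detail coefficients exceed a threshold are flagged for refinement. A minimal 2D illustration with PyWavelets follows (the wavelet choice and threshold are placeholder assumptions; this is the general idea, not the WAM algorithm itself):

```python
import numpy as np
import pywt

# Toy "solution" on a uniform 2D grid: smooth background plus a sharp,
# junction-like front near x = 0.2.
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
u = np.tanh(50 * (X - 0.2)) + 0.1 * Y

# One level of the 2D discrete wavelet transform: cA = coarse part,
# (cH, cV, cD) = horizontal / vertical / diagonal detail coefficients.
cA, (cH, cV, cD) = pywt.dwt2(u, "haar")

# Flag for refinement wherever any detail coefficient is significant.
detail = np.maximum(np.abs(cH), np.maximum(np.abs(cV), np.abs(cD)))
refine = detail > 0.05 * detail.max()      # placeholder threshold

print(f"flagged {refine.sum()} of {refine.size} coarse cells for refinement")
```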
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated procedure for the transfer through inheritance of molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
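The two network descriptors examined in the first part of this work can be computed directly from a contact map. A minimal sketch follows (assuming Cα coordinates are available and using a conventional 8 Å cutoff, which is an assumption, not the thesis's exact contact definition):

```python
import numpy as np
import networkx as nx

def contact_network(ca_coords, cutoff=8.0):
    """Residue contact network: nodes are residues, edges connect pairs of
    C-alpha atoms closer than `cutoff` angstroms (chain neighbors skipped)."""
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    g = nx.Graph()
    n = len(ca_coords)
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 2, n):       # skip trivial |i-j| <= 1 contacts
            if d[i, j] < cutoff:
                g.add_edge(i, j)
    return g

# Toy coordinates standing in for a real structure (e.g., parsed from a PDB file).
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.5, size=(60, 3)), axis=0)

g = contact_network(coords)
if nx.is_connected(g):
    print("characteristic path length:", nx.average_shortest_path_length(g))
print("clustering coefficient:", nx.average_clustering(g))
```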
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as Bologna University, M.I.T., Berkeley and more, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed analysis based on simulation of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms, as sketched below. Furthermore we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics-proprietary transport-level protocol that the author of this Thesis helped develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows reducing the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when the resources are reduced.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
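For orientation, the Spidergon topology places the nodes on a ring and adds an "across" link to the diametrically opposite node. The following is a simplified sketch of the shortest-path, across-first routing flavor commonly described for Spidergon in the literature; it is illustrative only, not the Equalized algorithm proposed in this Thesis:

```python
def spidergon_next_hop(src, dst, n):
    """Simplified Spidergon routing decision for one hop.

    Nodes 0..n-1 sit on a ring; each node i also has an 'across' link to
    node (i + n/2) mod n.  Sketch of the across-first idea: short clockwise
    or counterclockwise distances stay on the ring, otherwise cross first.
    """
    assert n % 2 == 0 and src != dst
    delta = (dst - src) % n
    if delta <= n // 4:
        return (src + 1) % n            # move clockwise along the ring
    if delta >= n - n // 4:
        return (src - 1) % n            # move counterclockwise
    return (src + n // 2) % n           # take the across link first

# Trace a packet from node 0 to node 9 on a 16-node Spidergon.
hop, path = 0, [0]
while hop != 9:
    hop = spidergon_next_hop(hop, 9, 16)
    path.append(hop)
print("route:", path)   # e.g. [0, 8, 9]: one across hop, one ring hop
```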
Abstract:
Which event study methods are best in non-U.S. multi-country samples? Nonparametric tests, especially the rank and generalized sign, are better specified and more powerful than common parametric tests, especially in multi-day windows. The generalized sign test is the best statistic but must be applied to buy-and-hold abnormal returns for correct specification. Market-adjusted and market-model methods with local market indexes, without conversion to a common currency, work well. The results are robust to limiting the samples to situations expected to be problematic for test specification or power. Applying the tests that perform best in simulation to merger announcements produces reasonable results.
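For concreteness, the generalized sign statistic favored here can be sketched as follows (a Cowan-style z-statistic; the function names and toy numbers are illustrative assumptions, not the paper's data): the fraction of positive buy-and-hold abnormal returns in the event window is compared against the baseline fraction of positive abnormal returns estimated from the estimation period.

```python
import math

def generalized_sign_z(event_bhars, estimation_ars):
    """Generalized sign test: compare the number of positive buy-and-hold
    abnormal returns in the event window with the proportion of positive
    abnormal returns observed over the estimation period."""
    p_hat = sum(1 for r in estimation_ars if r > 0) / len(estimation_ars)
    n = len(event_bhars)
    w = sum(1 for r in event_bhars if r > 0)
    return (w - n * p_hat) / math.sqrt(n * p_hat * (1 - p_hat))

# Toy numbers, purely illustrative: 60% positive BHARs in the event window
# against a 50% baseline from the estimation period gives z = 2.0.
estimation = [0.01, -0.02, 0.005, -0.001, 0.002, -0.03] * 40
event = [0.03] * 60 + [-0.01] * 40
print("z-statistic:", round(generalized_sign_z(event, estimation), 3))
```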
Abstract:
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, policy integration, etc. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in the dialect of XACML (Extensible Access Control Markup Language). We consider XACML policies because XACML is a rich language which can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
Abstract:
Many energy-related and environmental evaluations need appropriate meteorological data as input to analysis and forecasting software. In Italy adequate meteorological data are often unavailable because, in many cases, the data are incomplete, incorrect and also very expensive to obtain for a long-term analysis (which needs multi-year data sets). A possible solution to this problem is the use of a Typical Meteorological Year (TRY), generated for specific applications. So far TRYs have been created, using statistical criteria, only for the analysis of solar energy systems and for predicting the thermal performance of buildings, and have also been applied to the study of photovoltaic plants (PV), though not specifically created for this type of application. The present research has defined a methodology for the creation of TRYs for different applications. In particular, TRYs for environmental and wind-plant analysis have been created. This is the innovative aspect of this research, never explored before. In addition, the methodology for the generation of PV TRYs has been improved. The results are very good and the TRYs generated for these applications are adequate to characterize the climatic conditions of the place over a long period and can be used for energy-related and environmental studies.
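The statistical criteria conventionally used to assemble a typical year (the Finkelstein-Schafer approach behind the Sandia TMY method) can be sketched as follows. This is a generic illustration: the variables, weights and data layout are placeholder assumptions rather than the selection criteria actually developed in this research.

```python
import numpy as np

def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    empirical CDF of a candidate month and the long-term CDF, evaluated at
    the candidate's daily values."""
    long_term = np.sort(np.asarray(long_term))
    candidate = np.sort(np.asarray(candidate))
    n = len(candidate)
    ecdf_candidate = np.arange(1, n + 1) / n
    ecdf_long = np.searchsorted(long_term, candidate, side="right") / len(long_term)
    return np.mean(np.abs(ecdf_candidate - ecdf_long))

def pick_typical_month(daily_by_year, weights):
    """Select the year whose month minimizes the weighted FS over variables.
    `daily_by_year` maps year -> {variable: daily values}; `weights` maps
    variable -> weight (application-specific, e.g. wind-oriented)."""
    pooled = {v: np.concatenate([d[v] for d in daily_by_year.values()])
              for v in weights}
    score = {y: sum(w * fs_statistic(d[v], pooled[v]) for v, w in weights.items())
             for y, d in daily_by_year.items()}
    return min(score, key=score.get)

# Toy example: choose a "typical January" for a wind application.
rng = np.random.default_rng(2)
years = {y: {"wind": rng.weibull(2.0, 31) * (5 + 0.3 * (y - 2000))}
         for y in range(2000, 2010)}
print("typical January taken from:", pick_typical_month(years, {"wind": 1.0}))
```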
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology to answer the increasing requests for dynamic bandwidth allocation and to configure multiple topologies over the same physical-layer infrastructure; however, optical networks today are still “far” from being directly accessible for configuring and offering network services, and need to be enriched with more “user-oriented” functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to provide the network with improved usability and accessibility of the services offered by the Optical Network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates user and application access to abstracted levels of information regarding the offered advanced network services. This thesis faces the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, this work has focused on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging and promoting the development of the future optical transport network: Optical Burst and Packet Switching. The two technologies promise to provide all-optical burst or packet switching, respectively, instead of the current circuit switching. However, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission frequency, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance and timing-focused design of both memory and forwarding logic is needed. This open issue has been faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
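To illustrate how burst scheduling can reduce to a min/max computation, here is a minimal horizon-based (LAUC-style) wavelength scheduler: each channel keeps a single "horizon" timestamp, and the decision is a max over the feasible channels. This is a generic software sketch of the idea, not the hardware design proposed in the thesis, and all names are illustrative.

```python
def schedule_burst(horizons, arrival, duration):
    """Horizon-based burst scheduling (LAUC-like): among the channels that
    are free at `arrival` (horizon <= arrival), pick the one whose horizon
    is LATEST, i.e. minimize the created void -- a max over a vector.
    Returns the chosen channel index, or None if the burst must be dropped."""
    feasible = [c for c, h in enumerate(horizons) if h <= arrival]
    if not feasible:
        return None                       # no wavelength available: drop
    best = max(feasible, key=lambda c: horizons[c])
    horizons[best] = arrival + duration   # channel busy until the burst ends
    return best

# Three wavelengths, a small burst trace of (arrival, duration) pairs.
horizons = [0.0, 0.0, 0.0]
for arrival, duration in [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.5, 1.0), (6.5, 2.0)]:
    ch = schedule_burst(horizons, arrival, duration)
    print(f"burst at t={arrival}: channel {ch}, horizons now {horizons}")
```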
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most important vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches – Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) or Heckman's Selection Model (Heckman, 1979) – being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing in an automatic and multivariate way the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, by using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and by preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is non-parametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead calls for using the existing variability within the data and letting the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that in applied research all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on the Rubin Potential Outcome Approach, matching methods, and briefly on Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balancing in the correct way. The third part contains the original contribution proposed, a simulation study that allows checking the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and explain our future perspectives.
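The inertia-based measure of dependence between X and T can be illustrated with a generic chi-square/inertia computation on the contingency table of covariate profiles versus treatment (total inertia = chi-square / n, as in correspondence analysis). This sketch is a simplification of the thesis's partial dependence analysis, and all data below are fabricated toy values:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def inertia_due_to_treatment(x, t):
    """Total inertia (chi-square / n) of the contingency table between a
    categorical covariate X and the treatment indicator T; zero means the
    treatment assignment carries no dependence on X."""
    table = pd.crosstab(x, t)
    chi2, p_value, dof, _ = chi2_contingency(table)
    return chi2 / table.to_numpy().sum(), p_value

# Toy data: treatment probability depends on the covariate -> selection bias.
rng = np.random.default_rng(3)
x = rng.choice(["low", "mid", "high"], size=500)
p = {"low": 0.2, "mid": 0.5, "high": 0.8}
t = rng.binomial(1, [p[v] for v in x])

inertia, p_value = inertia_due_to_treatment(x, t)
print(f"inertia due to T: {inertia:.4f} (p = {p_value:.2g})")
```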
Abstract:
Selective oxidation is one of the simplest functionalization methods, and essentially all monomers used in manufacturing artificial fibers and plastics are obtained by catalytic oxidation processes. Formally, oxidation is considered as an increase in the oxidation number of the carbon atoms, so reactions such as dehydrogenation, ammoxidation, cyclization or chlorination are all oxidation reactions. In this field, most processes for the synthesis of important chemicals use vanadium oxide-based catalysts. These catalytic systems are used either in the form of multicomponent mixed oxides and oxysalts, e.g. in the oxidation of n-butane (V/P/O) and of benzene (supported V/Mo/O) to maleic anhydride, or in the form of supported metal oxides, e.g. in the manufacture of phthalic anhydride by o-xylene oxidation, of sulphuric acid by oxidation of SO2, in the reduction of NOx with ammonia and in the ammoxidation of alkyl aromatics. In addition, supported vanadia catalysts have also been investigated for the oxidative dehydrogenation of alkanes to olefins, the oxidation of pentane to maleic anhydride and the selective oxidation of methanol to formaldehyde or methyl formate [1]. During my PhD I focused my work on two gas-phase selective oxidation reactions. The work was done at the Department of Industrial Chemistry and Materials (University of Bologna) in collaboration with Polynt SpA. Polynt is a leading company in the development, production and marketing of catalysts for gas-phase oxidation. In particular, I studied the catalytic systems for n-butane oxidation to maleic anhydride (fluid-bed technology) and for o-xylene oxidation to phthalic anhydride. Both reactions are catalyzed by systems based on vanadium, but the catalysts are completely different. Part A is dedicated to the study of the V/P/O catalyst for n-butane selective oxidation, while in Part B the results of an investigation on TiO2-supported V2O5, the catalyst for o-xylene oxidation, are shown. In Part A, a general introduction about the importance of maleic anhydride, its uses, the industrial processes and the catalytic system is reported. The reaction is the only industrial direct oxidation of paraffins to a chemical intermediate. Maleic anhydride is produced by n-butane oxidation using either fixed-bed or fluid-bed technology; in both cases the catalyst is vanadyl pyrophosphate (VPP). Notwithstanding the good performance, the yield does not exceed 60% and the system is continuously studied to improve activity and selectivity. The main open problem is the understanding of the real active phase working under reaction conditions. Several articles deal with the role of different crystalline and/or amorphous vanadium/phosphorous (VPO) compounds. In all cases, bulk VPP is assumed to constitute the core of the active phase, while two different hypotheses have been formulated concerning the catalytic surface. In one case the development of surface amorphous layers that play a direct role in the reaction is described; in the second case specific planes of crystalline VPP are assumed to contribute to the reaction pattern, and the redox process occurs reversibly between VPP and VOPO4. Both hypotheses are also supported by in-situ characterization techniques, but the experiments were performed with different catalysts and probably under slightly different working conditions. Due to the complexity of the system, these differences could be the cause of the contradictions present in the literature.
Supposing that a key role could be played by the P/V ratio, I prepared, characterized and tested two samples with different P/V ratios. The transformations occurring on the catalytic surfaces under different conditions of temperature and gas-phase composition were studied by means of in-situ Raman spectroscopy, trying to investigate the changes that VPP undergoes during reaction. The goal is to understand which kind of compound constituting the catalyst surface is the most active and selective for the butane oxidation reaction, and also which features the catalyst should possess to ensure the development of this surface (e.g. catalyst composition). On the basis of the results of this study, it could be possible to design a new catalyst more active and selective than the present ones. In fact, the second topic investigated is the possibility of reproducing the surface active layer of VPP on a support. In general, supporting the active phase is a way to improve the mechanical features of the catalyst and to overcome problems such as the possible development of local hot-spot temperatures, which could cause a decrease of selectivity at high conversion, and the high cost of the catalyst. In the literature it is possible to find different works dealing with the development of supported catalysts, but in general the intrinsic characteristics of VPP are worsened by the chemical interaction between the active phase and the support. Moreover, all these works deal with the supporting of VPP itself; on the contrary, my work is an attempt to build up a V/P/O active layer on the surface of a zirconia support by thermal treatment of a precursor obtained by impregnation of a V5+ salt and of H3PO4. In-situ Raman analysis during the thermal treatment, as well as reactivity tests, are used to investigate the parameters that may influence the generation of the active phase. Part B is devoted to the study of o-xylene oxidation to phthalic anhydride; industrially, the reaction is carried out in the gas phase using as catalyst a supported system formed by V2O5 on TiO2. The V/Ti/O system is quite complex; different vanadium species may be present on the titania surface, as a function of the vanadium content and of the titania surface area: (i) a V species chemically bound to the support via oxo bridges (isolated V in octahedral or tetrahedral coordination, depending on the hydration degree), (ii) a polymeric species spread over the titania, and (iii) bulk vanadium oxide, either amorphous or crystalline. The different species could have different catalytic properties; therefore changing the relative amount of V species can be a way to optimize the catalytic performance of the system. For this reason, samples containing increasing amounts of vanadium were prepared and tested in the oxidation of o-xylene, with the aim of finding correlations between V/Ti/O catalytic activity and the amount of the different vanadium species. The second part deals with the role of a gas-phase promoter. The catalytic surface can change under working conditions; the high temperatures and a different gas-phase composition could also affect the formation of the different V species. Furthermore, in industrial practice, the vanadium oxide-based catalysts need the addition of gas-phase promoters to the feed stream which, although they do not have a direct role in the reaction stoichiometry, lead to a considerable improvement of catalytic performance when present.
The starting point of my investigation is the possibility that steam, a component always present in the oxidation reaction environment, could cause changes in the nature of the catalytic surface under reaction conditions. For this reason, the dynamic phenomena occurring at the surface of a 7 wt% V2O5-on-TiO2 catalyst in the presence of steam were investigated by means of Raman spectroscopy. Moreover, a correlation between the amount of the different vanadium species and the catalytic performance has been sought. Finally, the role of dopants has been studied. The industrial V/Ti/O system contains several dopants; the nature and the relative amount of the promoters may vary depending on the catalyst supplier and on the technology employed for the process, either a single-bed or a multi-layer catalytic fixed bed. Promoters have a quite remarkable effect on both activity and selectivity to phthalic anhydride. Their role is crucial, and the proper control of the relative amount of each component is fundamental for the process performance. Furthermore, it cannot be excluded that the same promoter may play different roles depending on the reaction conditions (temperature, composition of the gas phase, etc.). The reaction network of phthalic anhydride formation is very complex and includes several parallel and consecutive reactions; for this reason a proper understanding of the role of each dopant cannot be separated from the analysis of the reaction scheme. One of the most important promoters at the industrial level, which is always present in catalytic formulations, is Cs. It is known that Cs plays an important role in the selectivity to phthalic anhydride, but the reasons for this phenomenon are not really clear. Therefore the effect of Cs on the reaction scheme has been investigated at two different temperatures, with the aim of evidencing in which step of the reaction network this promoter plays its role.
Abstract:
Alzheimer's disease (AD) and cancer represent two of the main causes of death worldwide. They are complex multifactorial diseases, and several biochemical targets have been recognized to play a fundamental role in their development. Given their complex nature, a promising therapeutic approach could be represented by the so-called "Multi-Target-Directed Ligand" (MTDL) approach. This new strategy is based on the assumption that a single molecule could hit several targets responsible for the onset and/or progression of the pathology. In particular in AD, most currently prescribed drugs aim to increase the level of acetylcholine in the brain by inhibiting the enzyme acetylcholinesterase (AChE). However, clinical experience shows that AChE inhibition is a palliative treatment, and the simple modulation of a single target does not address AD aetiology. Research into newer and more potent anti-AD agents is thus focused on compounds whose properties go beyond AChE inhibition (such as inhibition of the enzyme β-secretase and inhibition of the aggregation of beta-amyloid). Therefore, the MTDL strategy seems a more appropriate approach for addressing the complexity of AD and may provide new drugs for tackling its multifactorial nature. This thesis describes the design of new MTDLs able to tackle the multifactorial nature of AD. These new MTDLs are less flexible analogues of Caproctamine, one of the first MTDLs possessing biological properties useful for AD treatment. The new compounds are able to inhibit the enzymes AChE and β-secretase and to inhibit both AChE-induced and self-induced beta-amyloid aggregation. In particular, the most potent compound of the series is able to inhibit AChE in the subnanomolar range, to inhibit β-secretase at micromolar concentration and to inhibit both AChE-induced and self-induced beta-amyloid aggregation at micromolar concentration. Cancer, like AD, is a very complex pathology, and many different therapeutic approaches are currently used for its treatment. However, due to its multifactorial nature, the MTDL approach could, in principle, be applied to this pathology as well. A further aim of this thesis has been the development of new molecules possessing different structural motifs able to simultaneously interact with some of the multitude of targets responsible for the pathology. The designed compounds displayed cytotoxic activity in different cancer cell lines. In particular, the most potent compounds of the series have been further evaluated, and they were able to bind DNA, being 100-fold more potent than the reference compound Mitonafide. Furthermore, these compounds were able to trigger apoptosis through caspase activation and to inhibit PIN1 (preliminary result). This last protein is a very promising target because it is overexpressed in many human cancers, it functions as a critical catalyst for multiple oncogenic pathways, and in several cancer cell lines depletion of PIN1 determines arrest of mitosis followed by apoptosis induction. In conclusion, this study may represent a promising starting point for the development of new MTDLs hopefully useful for cancer and AD treatment.