948 results for Integration And Modeling


Relevance: 90.00%

Abstract:

Doctoral programme in Oceanography

Relevance: 90.00%

Abstract:

Many combinatorial problems coming from the real world may not have a clear, well-defined structure: they are typically dirtied by side constraints, or composed of two or more sub-problems that are usually not disjoint. Such problems are not suitable for pure approaches based on a single programming paradigm, because a paradigm that can effectively handle one problem characteristic may behave inefficiently when facing others. In these cases, modelling the problem using different programming techniques, trying to "take the best" from each technique, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, with Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
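One of the integration frameworks named above, the Algorithm Portfolio, can be illustrated with a minimal sketch: several candidate solvers are tried on the same instance, each within a time budget, and the first solution found is returned. The toy subset-sum instance and both component solvers below are hypothetical stand-ins for illustration, not the CP/ILP solvers used in the thesis.

```python
import time

def run_portfolio(problem, solvers, budget_per_solver):
    """Sequential algorithm portfolio: try each solver in turn on the same
    problem, giving each a time budget; return the first solution found."""
    for solve in solvers:
        deadline = time.monotonic() + budget_per_solver
        solution = solve(problem, deadline)
        if solution is not None:
            return solution
    return None  # no solver succeeded within its budget

# Hypothetical component solvers for a toy subset-sum instance
# (find a subset of the numbers summing exactly to the target).
def cp_style_solver(problem, deadline):
    nums, target = problem
    # depth-first search with a simple propagation rule:
    # prune any branch whose partial sum already exceeds the target
    def dfs(i, total, chosen):
        if time.monotonic() > deadline:
            return None
        if total == target:
            return chosen
        if i == len(nums) or total > target:
            return None
        return (dfs(i + 1, total + nums[i], chosen + [nums[i]])
                or dfs(i + 1, total, chosen))
    return dfs(0, 0, [])

def ilp_style_solver(problem, deadline):
    return None  # placeholder: a real portfolio would call an ILP solver here

portfolio = [ilp_style_solver, cp_style_solver]
print(run_portfolio(([3, 9, 8, 4, 5], 17), portfolio, budget_per_solver=1.0))
# → [3, 9, 5]
```

A parallel portfolio would instead run the solvers concurrently and cancel the losers; the sequential form above keeps the sketch self-contained.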

Relevance: 90.00%

Abstract:

Recently, rising interest in political and economic integration/disintegration issues has developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitude towards the EU. Subsequently, it explores the links existing between such political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union.
Chapter 2 first proposes a model of citizens’ support for membership of international unions, with explicit reference to the EU; subsequently it tests the model on a panel of EU countries. What are the factors that influence public opinion support for the European Union (EU)? In international relations theory, the idea that citizens' support for the EU depends on material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kalthenthaler 1996), but made only informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economic model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries. The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state’s ability to determine its most favoured bundle of public goods.
In the simple model presented in Chapter 2, support for membership of the union is increasing in the union’s average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country’s average income, while increasing heterogeneity of preferences among countries points to a reduced scope of the union. Afterwards we empirically test the model with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens’ attitude towards their country’s membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a Europe-wide political crisis. These events show that nowadays understanding public attitude towards the EU is not only of academic interest, but has a strong relevance for policy-making too. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy.
Over the last few decades, the double tendency towards supranationalism and regional autonomy, which has characterised some European States, has taken a very interesting form in this country, because Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a new thorough constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called “devolution”, i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al. (2005b), which only concentrated on the EU activity (treaties, legislation, and European Court of Justice’s rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislations substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exerted in the various policy areas? Is their exertion consistent with the normative recommendations from the economic literature about their optimum allocation among different levels of government?
The main results show that, first, there seems to be a certain substitutability between EU and national legislations (even if not a very strong one), but not between regional and national ones. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in the legislative activity in many sectors, even where the rationale for that is, at best, very questionable, indicating that they actually share a larger number of policy tasks than suggested by economic theory. It appears therefore that an excessive number of competences are actually shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This may not only lead to a slower and less efficient policy-making process, but also risks making policies too complicated for citizens to understand; citizens, on the contrary, should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
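The centralisation trade-off at the heart of the model in Chapter 2 can be sketched numerically: scale benefits of membership against a heterogeneity cost that grows with the distance between a country's preferred policy and the union's common policy. The functional forms and parameter values below are assumptions chosen for illustration, not the specification proposed or estimated in the thesis.

```python
# Toy numerical illustration of the centralisation trade-off: utility of
# union membership rises with scale benefits and falls with the squared
# distance between a country's preferred policy and the common policy.
# Quadratic cost and all parameter values are illustrative assumptions.

def membership_support(scale_benefit, own_pref, union_policy, cost_weight):
    """Net benefit of membership: scale gains minus heterogeneity costs."""
    heterogeneity_cost = cost_weight * (own_pref - union_policy) ** 2
    return scale_benefit - heterogeneity_cost

# A country close to the union's common policy gains from membership...
print(membership_support(1.0, own_pref=0.2, union_policy=0.0, cost_weight=2.0))  # ≈ 0.92
# ...while a country with very different preferences loses from it.
print(membership_support(1.0, own_pref=1.0, union_policy=0.0, cost_weight=2.0))  # ≈ -1.0
```

With this form, support falls as preference heterogeneity rises, mirroring the reduced scope of the union described above.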

Relevância:

90.00% 90.00%

Publicador:

Resumo:

[EN] One of the main issues of the current education system is the lack of student motivation. This aspect, together with the permanent change that Information and Communications Technologies involve, represents a major challenge for the teacher: continuously updating contents and keeping the student's interest awake. A tremendously useful classroom tool is the integration of projects with participative and collaborative dynamics, where the teacher acts mainly as a guide for student activity rather than a mere transmitter of knowledge and evaluation. As a specific example of project-based learning, the EDUROVs project consists of building an inexpensive underwater robot from low-cost materials, while allowing the integration and programming of many accessories and sensors on a minimal budget using open-source hardware and software.

Relevance: 90.00%

Abstract:

Neuronal networks exhibit diverse types of plasticity, including the activity-dependent regulation of synaptic functions and refinement of synaptic connections. In addition, the continuous generation of new neurons in the “adult” brain (adult neurogenesis) represents a powerful form of structural plasticity, establishing new connections and possibly supplementing pre-existing neuronal circuits (Kempermann et al, 2000; Ming and Song, 2005). Neurotrophins, a family of neuronal growth factors, are crucially involved in the modulation of activity-dependent neuronal plasticity. The first evidence for the physiological importance of this role evolved from the observations that the local administration of neurotrophins has dramatic effects on the activity-dependent refinement of synaptic connections in the visual cortex (McAllister et al, 1999; Berardi et al, 2000; Thoenen, 1995). Moreover, the local availability of critical amounts of neurotrophins appears to be relevant for the ability of hippocampal neurons to undergo long-term potentiation (LTP) of synaptic transmission (Lu, 2004; Aicardi et al, 2004). To achieve a comprehensive understanding of the modulatory role of neurotrophins in integrated neuronal systems, information on the mechanisms of local neurotrophin synthesis and secretion, as well as on the distribution of their cognate receptors, is of crucial importance. In the first part of this doctoral thesis I have used electrophysiological approaches and real-time imaging techniques to investigate additional features of the regulation of neurotrophin secretion, namely the capability of the neurotrophin brain-derived neurotrophic factor (BDNF) to undergo synaptic recycling.
In cortical and hippocampal slices as well as in dissociated cell cultures, neuronal activity rapidly enhances the neuronal expression and secretion of BDNF, which is subsequently taken up by neurons themselves but also by perineuronal astrocytes, through the selective activation of BDNF receptors. Moreover, internalized BDNF becomes part of the releasable source of the neurotrophin, which is promptly recruited for activity-dependent recycling. Thus, we described for the first time that neurons and astrocytes contain an endocytic compartment competent for BDNF recycling, suggesting a specialized form of bidirectional communication between neurons and glia. The mechanism of BDNF recycling is reminiscent of that for neurotransmitters and identifies BDNF as a new modulator implicated in neuro- and glio-transmission. In the second part of this doctoral thesis I addressed the role of BDNF signaling in adult hippocampal neurogenesis. I generated a transgenic mouse model to specifically investigate the influence of BDNF signaling on the generation, differentiation, survival and connectivity of newborn neurons in the adult hippocampal network. I demonstrated that the survival of newborn neurons critically depends on the activation of the BDNF receptor TrkB. The TrkB-dependent decision regarding life or death in these newborn neurons takes place right at the transition point of their morphological and functional maturation. Before newborn neurons start to die, they exhibit a drastic reduction in dendritic complexity and spine density compared to wild-type newborn neurons, indicating that this receptor is required for the connectivity of newborn neurons. Both the failure to become integrated and the subsequent dying lead to impaired LTP. Finally, mice lacking a functional TrkB in the restricted population of newborn neurons show behavioral deficits, namely increased anxiety-like behavior.
These data suggest that the integration and establishment of proper connections by newly generated neurons into the pre-existing network are relevant features for regulating the emotional state of the animal.

Relevance: 90.00%

Abstract:

Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve considerably the sky coverage. One of the two Wavefront Sensors for mid-high-altitude atmosphere analysis has been integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report about this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture.
On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. To compensate for this effect, a straightforward solution is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous LGS image position measurement in the presence of elongated spots, and the determination of the required number of photons to achieve a certain average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype makes it possible to simulate realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
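The first of the three centroid algorithms named above, the (Weighted) Center of Gravity, can be sketched on a synthetic elongated spot. The Gaussian spot parameters, grid size, and the choice of the spot itself as the weighting map are illustrative assumptions, not the configurations studied in Chapter 3.

```python
import numpy as np

def centroid_wcog(image, weights=None):
    """Return the (x, y) centroid of an image; with a weighting map this
    becomes the Weighted Center of Gravity."""
    img = image * weights if weights is not None else image
    total = img.sum()
    ys, xs = np.indices(img.shape)  # row index = y, column index = x
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic elongated Gaussian spot, centred at (12, 8), elongated along x
ys, xs = np.indices((17, 25))
spot = np.exp(-((xs - 12.0) ** 2 / (2 * 4.0 ** 2)
                + (ys - 8.0) ** 2 / (2 * 1.5 ** 2)))

x, y = centroid_wcog(spot)
print(round(x, 3), round(y, 3))  # close to (12.0, 8.0) by symmetry

# Weighting by the expected spot shape down-weights the noisy wings of the
# elongated spot, which is where the WCoG gains over the plain CoG at low flux.
x_w, y_w = centroid_wcog(spot, weights=spot)
```

On noiseless, symmetric data the weighted and unweighted estimates coincide; the difference appears once photon noise is added to the elongated wings.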

Relevance: 90.00%

Abstract:

Summary: The formation of mid-ocean ridge basalts (MORB) is one of the most important material fluxes on Earth. Every year, more than 20 km³ of new magmatic crust is formed along the 75,000 km long mid-ocean ridge system, roughly 90 percent of global magma production. Although ocean ridges and MORB are among the most intensively studied geological topics, several controversies remain. Among the most important are the role of geodynamic boundary conditions, such as spreading rate or proximity to hotspots or transform faults, as well as the absolute degree of melting and the depth at which melting beneath the ridges begins. This dissertation addresses these topics on the basis of major- and trace-element compositions of minerals in oceanic mantle rocks. Geochemical characteristics of MORB indicate that the oceanic mantle begins to melt in the stability field of garnet peridotite. Recent experiments, however, show that the heavy rare earth elements (REE) are compatible in clinopyroxene (Cpx). Because of this garnet-like property of Cpx, garnet is no longer needed to explain the MORB data, which shifts the onset of melting to lower pressures. It is therefore important to test whether this hypothesis can be reconciled with data from abyssal peridotites. These mantle fragments exposed on the ocean floor represent the residues of the melting process, and their mineral chemistry carries information about the conditions under which the magmas formed. Major- and trace-element compositions of peridotite samples from the Central Indian Ridge (CIR) were determined by electron microprobe and ion probe and compared with published data. Cpx in the CIR peridotites show low ratios of middle to heavy REE and high absolute concentrations of the heavy REE.
Melting models of a spinel peridotite using conventional, incompatible partition coefficients (Kd's) cannot reproduce the measured middle-to-heavy REE fractionations. Applying the new Kd's, which predict compatible behaviour of the heavy REE in Cpx, yields better results but still cannot explain the most strongly fractionated samples. Moreover, very high degrees of melting would be required, which cannot be reconciled with the major-element data. Low (~3-5%) degrees of melting in the stability field of garnet peridotite, followed by further melting of spinel peridotite, can, however, largely explain the observations. Garnet must therefore still be regarded as an important phase in the genesis of MORB (Chapter 1). A further obstacle to a quantitative understanding of melting processes beneath mid-ocean ridges is the lack of correlation between major and trace elements in residual abyssal peridotites. The Cr/(Cr+Al) ratio (Cr#) in spinel is generally considered a good qualitative indicator of the degree of melting. The mineral chemistry of the CIR peridotites and published data from other abyssal peridotites show that the heavy REE correlate very well (r² ~ 0.9) with the Cr# of coexisting spinels. Evaluating this correlation yields a quantitative melting indicator for residues based on spinel chemistry, so that the degree of melting can be expressed as a function of Cr# in spinel: F = 0.10×ln(Cr#) + 0.24 (Hellebrand et al., Nature, in review; Chapter 2). Applying this indicator to mantle samples for which no ion-probe data are available makes it possible to link geochemical and geophysical data.
From a geodynamic perspective, the Gakkel Ridge in the Arctic Ocean is of great importance for understanding melting processes, since it has the lowest spreading rate worldwide and lacks large transform faults. Published basalt data point to an extremely low degree of melting, consistent with global correlations. Strongly altered mantle peridotites from one locality along the sparsely sampled Gakkel Ridge were therefore examined for primary minerals. Only in one sample are oxidized spinel pseudomorphs with traces of primary spinel preserved. Their Cr# is significantly higher than that of some peridotites from faster-spreading ridges, and their degree of melting is therefore higher than inferred from the basalt compositions. The degree of melting obtained with the above indicator allows the crustal thickness at the Gakkel Ridge to be calculated. This is considerably greater than the thickness derived from gravity data, or the crustal thickness obtained from the global correlation between spreading rate and seismically determined crustal thickness. This unexpected result may be attributed to compositional heterogeneities at low degrees of melting, or to an overall stronger depletion of the mantle beneath the Gakkel Ridge (Hellebrand et al., Chem. Geol., in review; Chapter 3). Additional information on the modelling and analytical methods is given in Appendices A-C.
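The spinel-based melting indicator quoted above, F = 0.10×ln(Cr#) + 0.24, is simple enough to express as a small helper function. The input-range check is my own assumption added for safety, not part of the published regression.

```python
import math

def melt_fraction(cr_number):
    """Degree of melting F from spinel Cr# = Cr/(Cr+Al), using the
    regression quoted above: F = 0.10*ln(Cr#) + 0.24."""
    if not 0.0 < cr_number <= 1.0:
        # Cr# is a molar ratio, so it must lie in (0, 1]; this guard is an
        # added assumption, not part of the published indicator
        raise ValueError("Cr# must lie in (0, 1]")
    return 0.10 * math.log(cr_number) + 0.24

# Example: a moderately depleted abyssal peridotite spinel
print(f"{melt_fraction(0.30):.3f}")  # 0.120, i.e. ~12% melting
```

Note that the regression is calibrated on residual abyssal peridotites; applying it far outside that Cr# range (e.g. very fertile or very refractory compositions) extrapolates beyond the data.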

Relevance: 90.00%

Abstract:

The present work is devoted to the assessment of the physics of energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation will be applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description will be shown to be crucial to understand the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall-turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall-turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes: one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized for filtered velocity fields is derived and discussed.
The results will show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between production-dominated scales and the inertial range, lc, and to the reverse energy cascade region lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall distance will be shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models will also be proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields will be shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy viscosity models are analyzed via an a priori procedure.
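The central quantity transported by the generalized Kolmogorov equation is the second-order structure function, or scale energy, ⟨|δu|²⟩ as a function of the separation r. The following sketch computes it from a synthetic 1D signal; the signal is a crude illustrative stand-in, whereas the actual analysis uses full DNS channel-flow fields and keeps the wall-normal dependence explicit.

```python
import numpy as np

# Sketch of the scale energy <(du)^2> entering the generalized Kolmogorov
# equation, computed from a synthetic 1D signal. The Brownian-like signal
# below is an illustrative assumption, not DNS data.

rng = np.random.default_rng(0)
n = 4096
u = np.cumsum(rng.standard_normal(n))  # rough stand-in for a turbulent signal
u -= u.mean()

def scale_energy(u, r):
    """<(u(x+r) - u(x))^2> at integer separation r (no periodicity assumed)."""
    du = u[r:] - u[:-r]
    return (du ** 2).mean()

for r in [1, 2, 4, 8, 16, 32]:
    print(r, round(scale_energy(u, r), 2))
# For this Brownian-like signal the scale energy grows roughly linearly
# with r, i.e. energy accumulates at the larger scales.
```

In the actual channel-flow analysis, the same increment statistics are additionally conditioned on the wall distance, which is what produces the combined physical/scale-space flux paths described above.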

Relevance: 90.00%

Abstract:

Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, represent an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and make it possible to introduce electronic systems into increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues due to the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out. First, reliability issues in terms of security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed. The developed scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of “energy scavenging systems” are analyzed: an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations are performed.
Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed with the development of an electrical and thermal model.
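The coupled electrical/thermal picture of a hot-spot can be sketched in a few lines: a shaded cell in a series string is forced into reverse bias by the string current, dissipates power, and heats up. The single-diode equation (with series resistance neglected), the lumped one-node thermal model, and all parameter values below are illustrative assumptions, not the model developed in the thesis.

```python
import math

def diode_current(v, iph, i0=1e-9, n=1.3, vt=0.02585, rsh=10.0):
    """Single-diode PV cell equation, series resistance neglected for brevity:
    I = Iph - I0*(exp(V/(n*Vt)) - 1) - V/Rsh. Parameter values are assumed."""
    return iph - i0 * (math.exp(v / (n * vt)) - 1.0) - v / rsh

def hotspot_temperature(i_string, v_reverse, t_amb=25.0, r_th=3.0):
    """One-node thermal model: T = T_amb + R_th * P_dissipated.
    R_th (K/W) lumps conduction/convection into a single assumed value."""
    p_dissipated = i_string * abs(v_reverse)  # W, cell absorbs power in reverse
    return t_amb + r_th * p_dissipated        # °C

# A shaded cell forced to carry 5 A at -8 V dissipates 40 W:
print(hotspot_temperature(5.0, -8.0))  # 145.0 °C with the assumed R_th
```

A full model would iterate the two parts, since the electrical parameters themselves depend on temperature; this sketch shows only one pass of the coupling.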

Relevance: 90.00%

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics has dramatically increased for both groundwater resource mapping and management, proving that these systems are appropriate for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the “Full waveform VTEM” dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration constitute the proper approach, capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from Geological Surveys and Universities to Private Companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further attempt to use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to support hydrogeological flow model prediction.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
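The voxel-based volume estimate mentioned above can be sketched as follows: classify each voxel of a resistivity model as aquifer material via a resistivity window, then sum the voxel volumes. The grid size, cell size, resistivity distribution and the window itself are illustrative assumptions, not the Spiritwood values.

```python
import numpy as np

# Sketch of a voxel-based aquifer volume estimate from a 3D resistivity
# model. All numbers below are illustrative assumptions.

rng = np.random.default_rng(1)
resistivity = rng.lognormal(mean=3.5, sigma=0.8, size=(40, 40, 20))  # ohm·m
cell_volume = 50.0 * 50.0 * 5.0  # m^3 per voxel (dx * dy * dz)

# Sand/gravel aquifer assumed to lie in a mid-range resistivity window,
# between conductive clays below and resistive bedrock above the window
aquifer_mask = (resistivity > 40.0) & (resistivity < 150.0)
aquifer_volume = aquifer_mask.sum() * cell_volume

print(f"aquifer voxels: {aquifer_mask.sum()}, volume: {aquifer_volume:.3e} m^3")
```

In practice the classification is knowledge-driven rather than a fixed threshold: borehole lithologies and ground TEM data calibrate which resistivity ranges map to which hydrogeological units.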

Relevance: 90.00%

Abstract:

Multifunctional Structures (MFS) represent one of the most promising disruptive technologies in the space industry. The possibility to merge spacecraft primary and secondary structures as well as attitude control, power management and onboard computing functions is expected to allow for mass, volume and integration effort savings. Additionally, this will bring the modular construction of spacecraft to a whole new level, by making the development and integration of spacecraft modules, or building blocks, leaner, reducing lead times from commissioning to launch from the current 3-6 years down to the order of 10 months, as foreseen by the latest Operationally Responsive Space (ORS) initiatives. Several basic functionalities have been integrated and tested in specimens of various natures over the last two decades. However, a more integrated, system-level approach was yet to be developed. The activity reported in this thesis was focused on the system-level approach to multifunctional structures for spacecraft, namely in the context of nano- and micro-satellites. This thesis documents the work undertaken in the context of the MFS program promoted by the European Space Agency under the Technology Readiness Program (TRP): a feasibility study, including specimen manufacturing and testing. The work sequence covered a state-of-the-art review, with particular attention to the traditional modular architectures implemented in the ALMASat-1 and ALMASat-EO satellites, and requirements definition, followed by the development of a modular multi-purpose nano-spacecraft concept, and finally by the design, integration and testing of integrated MFS specimens. The approach for the integration of several critical functionalities into nano-spacecraft modules was validated and the overall performance of the system was verified through relevant functional and environmental testing at University of Bologna and University of Southampton laboratories.

Relevance:

90.00%

Publisher:

Abstract:

Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for researchers in the field of cheminformatics. Such relationships are typically captured in (quantitative) structure-activity relationship ((Q)SAR) models, and model validation is essential to ensure predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving the use of such models in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results.
Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not help the user to better understand the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, help the chemist to better understand patterns and regularities and to relate the observations to established scientific knowledge.
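The similarity-driven 3D arrangement described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn and toy binary fragment fingerprints; it uses a plain PCA projection, not CheS-Mapper's actual embedding algorithm:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy binary structural-fragment fingerprints (one row per compound).
# Compounds 0 and 1 share most fragments; 2 and 3 form a second group.
fingerprints = np.array([
    [1, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 1, 1],
])

# Project the selected features to 3D so that Euclidean proximity
# in the plot approximates similarity in the chosen feature space.
coords = PCA(n_components=3).fit_transform(fingerprints)
print(coords.shape)  # (4, 3)
```

With only four points, the 3D projection is exact, so similar compounds (0 and 1) land closer together than dissimilar ones (0 and 2); on real datasets the projection is approximate and the choice of features drives the layout, as in the viewer.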
Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
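The validation comparison underlying this work (k-fold cross-validation versus a single external test set) can be sketched as below. This is a minimal scikit-learn illustration on synthetic data, with a random forest as a stand-in learner; it is not the study's actual (Q)SAR pipeline or datasets:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic "compounds": feature vectors with a binary activity endpoint.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# External test set validation: hold out 25% of the data a single time.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
external_acc = model.score(X_te, y_te)

# 5-fold cross-validation: every sample is held out exactly once,
# which tends to reduce the variance of the performance estimate.
cv_scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"external: {external_acc:.2f}, 5-fold CV mean: {cv_scores.mean():.2f}")
```

The single held-out split yields one accuracy number whose value depends on which samples happened to land in the test set, whereas the five fold scores can be averaged and their spread inspected.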

Relevance:

90.00%

Publisher:

Abstract:

The precise timing of events in the brain has consequences for intracellular processes, synaptic plasticity, integration and network behaviour. Pyramidal neurons, the most widespread excitatory neurons of the neocortex, have multiple spike initiation zones, which interact via dendritic and somatic spikes actively propagating in all directions within the dendritic tree. For these neurons, therefore, both the location and timing of synaptic inputs are critical. The time window within which the backpropagating action potential can influence dendritic spike generation has been extensively studied in layer 5 neocortical pyramidal neurons of rat somatosensory cortex. Here, we re-examine this coincidence detection window for pyramidal cell types across the rat somatosensory cortex in layers 2/3, 5 and 6. We find that the time window for optimal interaction is widest, and shifted, in layer 5 pyramidal neurons relative to cells in layers 6 and 2/3. Inputs arriving at the same times and locations will therefore differentially affect spike-timing-dependent processes in the different classes of pyramidal neurons.
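The coincidence-detection idea above can be illustrated with a toy rule: a backpropagating action potential facilitates dendritic spike generation only if it arrives within some time window of the synaptic input. The window widths below are purely hypothetical placeholders; the measured, layer-specific values are the subject of the study:

```python
def within_coincidence_window(bap_ms: float, input_ms: float,
                              window_ms: float) -> bool:
    """Toy test: does the backpropagating AP arrive close enough in
    time to the synaptic input to facilitate a dendritic spike?"""
    return abs(bap_ms - input_ms) <= window_ms

# Hypothetical windows, wider for layer 5 than for layers 2/3 and 6.
windows_ms = {"L5": 8.0, "L2/3": 4.0, "L6": 4.0}

# The same input delay can be coincident for one cell class but not another.
delay_ms = 6.0
coincident = {layer: within_coincidence_window(0.0, delay_ms, w)
              for layer, w in windows_ms.items()}
print(coincident)  # {'L5': True, 'L2/3': False, 'L6': False}
```

A 6 ms delay falls inside the (hypothetical) wider layer 5 window but outside the narrower layer 2/3 and layer 6 windows, which is the sense in which identical input timing differentially drives spike-timing-dependent processes across cell classes.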

Relevance:

90.00%

Publisher:

Abstract:

Studies using cultured cells allow one to dissect complex cellular mechanisms in greater detail than studying living organisms alone. However, before cultured cells can deliver meaningful results, they must accurately represent the in vivo situation. Over the last three to four decades, considerable effort has been devoted to the development of culture media that improve in vitro growth and modeling accuracy. In contrast to the earlier large-scale, non-specific screening of factors, in recent years the development of such media has relied increasingly on a deeper understanding of the cell's biology and on the selection of growth factors that specifically activate known biological processes. These new media now enable equal or better cell isolation and growth with significantly simpler and less labor-intensive methodologies. Here we describe a simple method to isolate and cultivate epidermal keratinocytes from embryonic or neonatal skin on uncoated plastic, using a medium specifically designed to retain epidermal keratinocyte progenitors in an undifferentiated state for improved isolation and proliferation, together with an alternative medium to support terminal differentiation.