10 results for Univalent Functions with Negative Coefficients

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

In this thesis we consider a class of second order partial differential operators with non-negative characteristic form and smooth coefficients. The main assumptions on the relevant operators are hypoellipticity and the existence of a well-behaved global fundamental solution. We first carry out a deep analysis of the L-Green function for arbitrary open sets and of its applications to Riesz-type representation theorems for L-subharmonic and L-superharmonic functions. Then, we prove an Inverse Mean Value Theorem characterizing the superlevel sets of the fundamental solution by means of L-harmonic functions. Furthermore, we establish a Lebesgue-type result showing the role of the mean-integral operator in solving the homogeneous Dirichlet problem related to L in the Perron-Wiener sense. Finally, we compare Perron-Wiener and weak variational solutions of the homogeneous Dirichlet problem, under specific hypotheses on the boundary datum.
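
For orientation, the classical model case behind these results is the mean-value property for the Laplace operator; the display below is standard textbook material (not taken from the thesis), stated here because the thesis generalises precisely this picture, with Euclidean balls replaced by superlevel sets of the fundamental solution of L.

```latex
% Classical model case (Laplace operator), stated for orientation only.
% Gauss mean-value property:
\[
  u(x_0) \;=\; \frac{1}{|B_r(x_0)|} \int_{B_r(x_0)} u(y)\,\mathrm{d}y
  \quad \text{for every } u \text{ harmonic near } \overline{B_r(x_0)},
\]
% and conversely (Kuran's inverse mean-value theorem): an open set D of
% finite measure containing x_0 on which this identity holds (with
% B_r(x_0) replaced by D) for every integrable harmonic u must be a
% ball centred at x_0.
```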

Relevance:

100.00%

Publisher:

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering had not been investigated before. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying several conditions (e.g., the kind of margins, namely distinct, overlapping and nested, and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the implemented R functions with their output are given. The CoClust algorithm is tested on simulated data (by varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with model-based clustering by using different measures of performance, such as the percentage of runs in which the number of clusters is correctly identified and the percentage of non-rejections of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
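
As a purely illustrative sketch of the core ingredient the abstract describes — scoring a candidate group of observations by the maximized log-likelihood of a fitted copula — the Python snippet below evaluates a Gaussian-copula log-likelihood on pseudo-observations. It is not the thesis's CoClust (which is implemented in R and iterates over candidate allocations); the function name and the choice of the Gaussian family are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglik(u):
    """Log-likelihood of a Gaussian copula fitted to pseudo-observations.

    u : (n, d) array with entries in (0, 1), e.g. rescaled ranks of the margins.
    """
    z = stats.norm.ppf(u)                 # map pseudo-observations to normal scores
    r = np.corrcoef(z, rowvar=False)      # estimate the copula correlation matrix
    r_inv = np.linalg.inv(r)
    _, logdet = np.linalg.slogdet(r)
    # copula log-density per row: -1/2 log|R| - 1/2 z'(R^{-1} - I) z
    quad = np.einsum("ij,jk,ik->i", z, r_inv - np.eye(z.shape[1]), z)
    return -0.5 * (len(u) * logdet + quad.sum())

rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=200)
u = stats.rankdata(x, axis=0) / (len(x) + 1)   # pseudo-observations in (0, 1)
print(gaussian_copula_loglik(u))  # higher for strongly dependent groupings
```

In a CoClust-like procedure this score would be the criterion compared across candidate allocations of observations to clusters.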

Relevance:

100.00%

Publisher:

Abstract:

Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts in promoting the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, and going beyond singular case studies towards a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated: there is a variety of methodologies and approaches for studying and interpreting this phenomenon, and at the same time little comparability among studies on actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, since the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, these countries have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries; very few studies in the literature attempt to examine cluster performance in a comparative cross-country perspective. It adds an analysis of cluster policies and their implementation, or lack thereof, as a way to analyse how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries embracing it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on the common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network membership applications. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle that have received much attention. Bulgaria and the Czech Republic are the countries that have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and unsystematically.
When examining whether firms located within regional clusters in fact perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country with a negative impact from being located in a cluster is the Czech Republic.

Relevance:

100.00%

Publisher:

Abstract:

Data from various studies carried out in recent years in Italy on the problem of school dropout in secondary education show that difficulty in studying mathematics is one of the most frequent sources of discomfort reported by students. Nevertheless, it is definitely unrealistic to think we can do without such knowledge in today's society: mathematics is widely taught in secondary school and is not confined to technical-scientific courses only. It is reasonable to say that, although students may choose academic paths that are apparently far removed from mathematics, all of them will have to come to terms with this subject sooner or later in their lives. Among the reasons for the discomfort caused by the study of mathematics, some concern the very nature of the subject, and in particular the complex symbolic language through which it is expressed. In fact, mathematics is a multimodal system composed of oral and written verbal texts, symbolic expressions such as formulae and equations, figures and graphs. For this reason, the study of mathematics represents a real challenge for those who suffer from dyslexia, a constitutional condition limiting a person's performance in reading and writing and, in particular, in the study of mathematical content. Here, the difficulties in working with verbal and symbolic codes entail, in turn, difficulties in comprehending the texts from which to deduce the operations that, once combined, would lead to the final solution of a problem. Information technologies may effectively support learners with this disorder. However, these tools have some implementation limits that restrict their use in the study of scientific subjects. Speech-synthesis word processors are currently used to compensate for reading difficulties within the area of classical studies, but they are not used in mathematics. This is because the speech synthesiser (or rather, the screen reader supporting it) is unable to interpret anything that is not textual, such as symbols, images and graphs. The DISMATH software, which is the subject of this project, allows dyslexic users to read technical-scientific documents with the help of a speech synthesiser, to understand the spatial structure of formulae and matrices, and to write documents with technical-scientific content in a format compatible with the main scientific editors. The system uses LaTeX, a text-based mathematical markup language, as its mediation system. It is set up as a LaTeX editor whose graphical interface, in line with the main commercial products, offers some additional functions specifically designed to support users who are unable to manage verbal and symbolic codes on their own. LaTeX is translated in real time into a standard symbolic representation and read by the speech synthesiser in natural language, in order to increase, through this bimodal representation, the ability to process information. The understanding of a mathematical formula through its reading is made possible by the deconstruction of the formula into a tree representation, which allows the logical elements composing it to be identified. Users, even without knowing the LaTeX language, are able to write whatever scientific document they need: the symbolic elements are selected from dedicated menus and automatically translated by the software, which manages the correct syntax.
The final aim of the project, therefore, is to implement an editor enabling dyslexic people (but not only them) to manage mathematical formulae effectively, through the integration of different software tools, thus also allowing better teacher/learner interaction.
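
As a toy illustration of the deconstruction-and-reading idea described above (hypothetical code, unrelated to the actual DISMATH implementation), the Python snippet below renders a tiny LaTeX subset as natural-language text of the kind a speech synthesiser could read aloud.

```python
import re

def speak_latex(src: str) -> str:
    """Render a tiny LaTeX subset as spoken English (toy example only)."""
    # innermost constructs first, so nested arguments are already plain text
    src = re.sub(r"\\sqrt\{([^{}]*)\}", r"the square root of \1", src)
    src = re.sub(r"\\frac\{([^{}]*)\}\{([^{}]*)\}",
                 r"the fraction \1 over \2", src)
    return src.replace("^", " to the power ").replace("_", " sub ")

print(speak_latex(r"x^2 + \frac{1}{\sqrt{y}}"))
# -> x to the power 2 + the fraction 1 over the square root of y
```

A real system would parse the formula into a full tree rather than rely on regular expressions, but the example shows how structural deconstruction turns symbolic notation into readable text.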

Relevance:

100.00%

Publisher:

Abstract:

The topic of my Ph.D. thesis is the finite element modeling of coseismic deformation imaged by DInSAR and GPS data. I developed a method to calculate synthetic Green functions with finite element models (FEMs) and then use linear inversion methods to determine the slip distribution on the fault plane. The method is applied to the 2009 L'Aquila earthquake (Italy) and to the 2008 Wenchuan earthquake (China). I focus on the influence of the rheological features of the Earth's crust, by incorporating seismic tomographic data, and on the influence of topography, by incorporating Digital Elevation Model (DEM) layers into the FEMs. Results for the L'Aquila earthquake highlight the non-negligible influence of the medium structure: homogeneous and heterogeneous models show discrepancies of up to 20% in the fault slip distribution values. Furthermore, in the heterogeneous models a new area of slip appears above the hypocenter. Regarding the 2008 Wenchuan earthquake, the very steep topographic relief of the Longmen Shan Range is implemented in my FE model, and a large number of DEM layers covering East China are used to achieve complete coverage of the model. My objective was to explore the influence of topography on the retrieved coseismic slip distribution. The inversion results reveal significant differences between the flat and the topographic model. Thus, the flat models frequently adopted are inappropriate for representing the Earth's surface topography, especially in the case of the 2008 Wenchuan earthquake.
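
The linear step the abstract refers to can be sketched as follows (illustrative Python with synthetic data, not the thesis's FEM pipeline): once the Green functions G are assembled, the slip distribution m is recovered from the surface displacements d by damped least squares. All names and numbers below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_patch = 120, 30
G = rng.normal(size=(n_obs, n_patch))     # stand-in Green functions, one column per fault patch
m_true = np.maximum(rng.normal(1.0, 0.5, n_patch), 0.0)  # "true" slip on each patch
d = G @ m_true + rng.normal(scale=0.05, size=n_obs)      # noisy surface displacements

# Tikhonov-damped least squares: minimise ||G m - d||^2 + eps^2 ||m||^2
eps = 0.1
A = np.vstack([G, eps * np.eye(n_patch)])
b = np.concatenate([d, np.zeros(n_patch)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"max slip error: {np.abs(m_est - m_true).max():.3f}")
```

In the thesis the columns of G come from FEM runs (with or without crustal heterogeneity and topography), which is exactly why the recovered m changes between the flat and topographic models.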

Relevance:

100.00%

Publisher:

Abstract:

Compared to other materials, plastics have registered a strong acceleration in production and consumption over the last years. Despite the existence of waste management systems, plastic-based materials are still a pervasive presence in the environment, with negative consequences for marine ecosystems and human health. Recycling is still challenging due to the growing complexity of product design, so-called overpackaging, insufficient and inadequate recycling infrastructure, the weak market for recycled plastics and the high cost of waste treatment and disposal. The Circular Economy Package, the European Strategy for Plastics in a Circular Economy and the recent European Green Deal include very ambitious programmes to rethink the entire plastics value chain. As regards packaging, all plastic packaging will have to be 100% recyclable (or reusable) and 55% recycled by 2030. Regions are consequently called upon to set up a robust plan able to meet the European objectives. This takes on particular importance in Emilia-Romagna, where the Packaging Valley is located. This thesis supports the definition of a strategy aimed at establishing an after-use plastics economy in the region. The PhD work has set the basis and the instruments to establish the so-called Circularity Strategy, with the aim of turning about 92,000 t of plastic waste into profitable secondary resources. System innovation, life cycle thinking and a participative backcasting method have made it possible to analyse the current system in depth, orient the problem and explore sustainable solutions through broad stakeholder participation. A material flow analysis, accompanied by a barrier analysis, has supported the identification of the gaps between the present situation and the 2030 scenario. Eco-design for and from recycling, together with a mass-based recycling rate (based on the effective amount of plastic waste turned into secondary plastics) complemented by a value-based indicator, are the key points of the action plan.
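
As a toy numerical illustration of the mass-based recycling rate mentioned above: only the 92,000 t figure comes from the abstract, and both yields below are made-up assumptions.

```python
# Mass-based recycling rate: the share of collected plastic waste that is
# effectively turned into secondary plastics, not merely collected.
collected_t = 92_000        # plastic waste targeted by the strategy, tonnes
sorting_yield = 0.70        # hypothetical share surviving sorting
reprocessing_yield = 0.80   # hypothetical share surviving reprocessing

secondary_t = collected_t * sorting_yield * reprocessing_yield
print(f"secondary plastics: {secondary_t:,.0f} t "
      f"-> mass-based recycling rate: {secondary_t / collected_t:.0%}")
```

With these illustrative yields only 56% of the collected mass becomes secondary plastics, which is why a mass-based indicator is stricter than a collection-based one.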

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, leads to a landscape that requires the development of embedded hardware and software able to face these applications effectively and efficiently. The development of application-based software on real-time/FPGA hardware can be a good answer to these challenges: the FPGA provides parallel, low-level, high-speed computation and timing, while the real-time processor can deterministically handle high-level calculation layers, logging and communication functions. Thanks to their software flexibility and small dimensions, these architectures fit perfectly as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for test bench and on-vehicle applications. Efforts have been made to build a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario are shown: dedicated solutions for prototype applications have been developed exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) and custom RCP functionalities, such as water injection and hydraulic brake testing control.
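
A schematic sketch of the described layering, in plain Python with no real FPGA or real-time hardware involved, might look as follows: a fast low-level producer stands in for the FPGA and a slower high-level consumer for the real-time processor handling batching and logging.

```python
import queue
import threading

samples = queue.Queue()

def fast_layer(n=1000):
    # stands in for the FPGA layer: a tight, simple, periodic producer
    for i in range(n):
        samples.put(i * 0.001)  # e.g. one sensor reading per tick

def slow_layer(total=1000, batch=100):
    # stands in for the real-time processor: batching, logging, communication
    logged = 0
    while logged < total:
        chunk = [samples.get() for _ in range(batch)]
        logged += len(chunk)
        print(f"logged {logged} samples, last={chunk[-1]:.3f}")

producer = threading.Thread(target=fast_layer)
producer.start()
slow_layer()
producer.join()
```

The design point is the split itself: the low-level loop stays simple and deterministic, while everything stateful (buffers, logs, communication) lives in the higher layer.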

Relevance:

100.00%

Publisher:

Abstract:

In solid rocket motors, the absence of combustion controllability and the large amount of financial resources involved in full-scale firing tests increase the importance of numerical simulations for assessing stringent mission thrust requirements and evaluating the influence of thrust chamber phenomena affecting grain combustion. Among those phenomena, local grain defects (propellant casting inclusions and debondings), combustion heat accumulation involving pressure peaks (Friedman Curl effect), and the ablation of case-insulating thermal protection material affect thrust prediction, causing non-negligible deviations with respect to the nominal expected trace. Most recent models have proposed a simplified treatment of the problem using empirical corrective functions, with the disadvantage that the physical dynamics are not fully understood and the results are therefore not predictive for different solid rocket motor configurations under varying boundary conditions. This work introduces different mathematical approaches to model, analyze, and predict the abovementioned phenomena, presenting a detailed physical interpretation based on existing SRM configurations. Internal ballistics predictions are obtained with an in-house simulation software in which the adoption of a dynamic three-dimensional triangular mesh, together with advanced computer graphics methods, allows this target to be reached. The numerical procedures are explained in detail, and the simulation results are discussed against experimental data.
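
For orientation, a standard zero-dimensional internal-ballistics relation (textbook material, not the thesis's three-dimensional mesh code) links steady chamber pressure to grain and throat geometry through Saint-Robert's burning-rate law; all the numbers below are illustrative only.

```python
# Steady chamber pressure from the mass balance
#   rho_p * a * p**n * A_b = p * A_t / c_star,
# with Saint-Robert's burning-rate law r = a * p**n.
rho_p = 1750.0          # propellant density, kg/m^3 (illustrative)
a, n = 5.0e-5, 0.35     # burn-rate coefficient (m/s per Pa^n) and exponent
c_star = 1550.0         # characteristic velocity, m/s
A_b, A_t = 0.80, 0.004  # burning surface and nozzle throat areas, m^2

p_c = (rho_p * a * c_star * A_b / A_t) ** (1.0 / (1.0 - n))
r = a * p_c**n          # resulting burn rate
print(f"chamber pressure ~ {p_c / 1e6:.1f} MPa, burn rate ~ {r * 1000:.1f} mm/s")
```

Local defects and ablation change A_b and the wall boundary in time, which is why a full three-dimensional, mesh-based treatment such as the one in the thesis is needed for accurate thrust traces.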

Relevance:

100.00%

Publisher:

Abstract:

Although errors might foster learning, they can also be perceived as something to avoid if they are associated with negative consequences (e.g., receiving a bad grade or being mocked by classmates). Such adverse perceptions may trigger negative emotions and error-avoidance attitudes, limiting the possibility of using errors for learning. These reactions may be influenced by relational and cultural aspects of errors that characterise the learning environment. Accordingly, the main aim of this research was to investigate whether relational and cultural characteristics associated with errors affect the psychological mechanisms triggered by making mistakes. In the theoretical part, we describe the role of errors in learning using an integrated multilevel (i.e., psychological, relational, and cultural levels of analysis) approach. Then, we present three studies that analyse how cultural and relational error-related variables affect psychological aspects. Each study adopted a specific empirical methodology (qualitative, experimental, and correlational, respectively) and investigated a different sample (teachers, primary school pupils, and middle school students, respectively). The findings of study one (cultural level) highlighted that errors acquire different meanings, which are associated with different teachers' error-handling strategies (e.g., supporting or penalising errors). Study two (relational level) demonstrated that teachers' supportive error-handling strategies promote students' perceptions of being in a positive error climate. The findings of study three (relational and psychological levels) showed that a positive error climate fosters students' adaptive reactions towards errors and their learning outcomes. Overall, our findings indicate that different variables influence students' learning-from-errors process and that teachers play an important role in conveying specific meanings of errors during learning activities, dealing with students' mistakes supportively, and establishing an error-friendly classroom environment.

Relevance:

100.00%

Publisher:

Abstract:

The study carried out in this thesis is devoted to the spectral analysis of systems of PDEs related also to quantum physics models. Namely, the research deals with classes of systems that contain certain quantum optics models, such as the Jaynes-Cummings and Rabi models and their generalizations, which describe light-matter interaction. First, we investigate the spectral Weyl asymptotics for a class of semiregular systems, extending to the vector-valued case results of Helffer and Robert and, more recently, of Doll, Gannot and Wunsch. The asymptotics by Doll, Gannot and Wunsch is more precise (which is why we call it refined) than the classical result by Helffer and Robert, but it deals with a less general class of systems, since the authors make a hypothesis on the measure of the subset of the unit sphere on which the tangential derivatives of the X-ray transform of the semiprincipal symbol vanish to infinite order. Next, we give a meromorphic continuation of the spectral zeta function for semiregular differential systems with polynomial coefficients, generalizing results by Ichinose and Wakayama and by Parmeggiani. Finally, we state and prove a quasi-clustering result for a class of systems including the aforementioned quantum optics models, and we conclude the thesis by showing a Weyl law result for the Rabi model and its generalizations.
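
For orientation only, here are the standard quantum Rabi Hamiltonian and the classical scalar Weyl law that the vector-valued asymptotics generalise; these are textbook definitions, not results from the thesis.

```latex
% Quantum Rabi Hamiltonian (omega: field frequency, omega_0: atomic
% splitting, g: coupling; a, a^dagger: annihilation/creation operators):
\[
  H_{\mathrm{Rabi}} \;=\; \omega\, a^{\dagger} a
  \;+\; \frac{\omega_0}{2}\,\sigma_z
  \;+\; g\,\sigma_x \bigl(a + a^{\dagger}\bigr).
\]
% Classical scalar Weyl law: the eigenvalue counting function grows like
% the phase-space volume of the sublevel sets of the symbol a(x, xi):
\[
  N(\lambda) \;=\; \#\{\, j : \lambda_j \le \lambda \,\}
  \;\sim\; \frac{1}{(2\pi)^n}\,
  \operatorname{Vol}\bigl\{ (x,\xi) \in \mathbb{R}^{2n} : a(x,\xi) \le \lambda \bigr\},
  \qquad \lambda \to +\infty .
\]
```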