872 results for Developed model


Relevance:

30.00%

Publisher:

Abstract:

The present thesis investigates work-family conflict and facilitation in a healthcare context, using the DISC Model (De Jonge and Dormann, 2003, 2006). The general aim is articulated in the two empirical studies reported in the chapters of this dissertation. Chapter 1 reports the psychometric properties of the Demand-Induced Strain Compensation Questionnaire (DISQ). Although the DISC Model has received a fair amount of attention in the literature, both for its theoretical principles and for the instrument developed to operationalize them (DISQ; De Jonge, Dormann, Van Vegchel, Von Nordheim, Dollard, Cotton and Van den Tooren, 2007), there are no studies devoted solely to the psychometric investigation of the instrument. In addition, no previous study has used the DISC as a model or measurement instrument in an Italian context. The first chapter of the present dissertation is therefore devoted to a psychometric investigation of the DISQ. Chapter 2 reports a longitudinal study. Its purpose was to examine, using the DISC Model, the relationship between emotional job characteristics, the work-family interface and emotional exhaustion in a health care population. We first tested the Triple Match Principle of the DISC Model using solely the emotional dimension of the stress-strain process (i.e. emotional demands, emotional resources and emotional exhaustion). We then investigated the mediating role played by work-family conflict and work-family facilitation in the relation between emotional job characteristics and emotional exhaustion. Finally, we compared the mediation model across workers involved in chronic-illness home demands and workers who are not. A general conclusion integrates and discusses the main findings of the studies reported in this dissertation.
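A mediation test of the kind described (product-of-coefficients with a Sobel standard error) can be sketched in a few lines; the variable names and synthetic data below are ours for illustration, not the thesis' measures or analysis code.

```python
# Hedged sketch: simple mediation test (emotional demands -> w-f conflict -> exhaustion)
# via the product-of-coefficients approach. Variable names and data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
demands = rng.normal(size=n)                                       # X: emotional demands
conflict = 0.5 * demands + rng.normal(size=n)                      # M: work-family conflict
exhaustion = 0.3 * demands + 0.4 * conflict + rng.normal(size=n)   # Y: emotional exhaustion

# Path a: X -> M
a_model = sm.OLS(conflict, sm.add_constant(demands)).fit()
# Path b (and direct effect c'): (X, M) -> Y
b_model = sm.OLS(exhaustion, sm.add_constant(np.column_stack([demands, conflict]))).fit()

a, b = a_model.params[1], b_model.params[2]
se_a, se_b = a_model.bse[1], b_model.bse[2]
indirect = a * b
sobel_se = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # Sobel standard error
print(f"indirect effect = {indirect:.3f}, Sobel z = {indirect / sobel_se:.2f}")
```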

Relevance:

30.00%

Publisher:

Abstract:

A newly developed global atmospheric chemistry and circulation model (ECHAM5/MESSy1) was used to study the chemistry and transport of ozone precursors, with a focus on non-methane hydrocarbons. To this end, the model was extensively evaluated by comparing its results with measurements of various origins. The analysis shows that the model predicts the distribution of ozone realistically, both in magnitude and in seasonal cycle. At the tropopause, the model correctly reproduces the exchange between stratosphere and troposphere without prescribed fluxes or concentrations. The model simulates the ozone precursors with varying quality compared to the measurements. While the alkanes are reproduced well by the model, some deviations arise for the alkenes. Of the oxidized species, formaldehyde (HCHO) is reproduced correctly, whereas the correlations between observations and model results are far poorer for methanol (CH3OH) and acetone (CH3COCH3). To improve the quality of the model with respect to the oxidized species, several sensitivity studies were carried out. These species are influenced by emissions from and deposition to the ocean, and knowledge of the gas exchange with the ocean is subject to large uncertainties. To improve the results of ECHAM5/MESSy1, the new submodel AIRSEA was developed and integrated into the MESSy structure. This submodel accounts for the gas exchange between ocean and atmosphere, including that of the oxidized species. AIRSEA, which requires information about the liquid-phase concentration of the gas in the ocean surface water, was tested extensively. The application of the new submodel slightly improves the model results for acetone and methanol, although the use of a prescribed liquid-phase concentration strongly limits the success of the method, since measurement data are not available in sufficient quantity. This work provides new insights into organic species and highlights the importance of the ocean-atmosphere coupling for the budgets of many gases.
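Air-sea gas exchange of the kind handled by a submodel like AIRSEA is commonly parameterized with a two-film bulk flux formula, F = k_w (C_w - C_a/H). The snippet below is a generic sketch of that parameterization, not the actual AIRSEA code; the transfer-velocity formula (a Wanninkhof-type quadratic wind-speed dependence) and all parameter values are assumptions for illustration.

```python
# Hedged sketch of a two-film bulk air-sea gas flux, F = k_w * (C_w - C_a / H).
# Not the AIRSEA submodel itself; parameterization and constants are illustrative.
import numpy as np

def transfer_velocity(u10, schmidt):
    """Gas transfer velocity k_w in m/s from 10 m wind speed (Wanninkhof-style)."""
    k_cm_per_h = 0.31 * u10**2 * (schmidt / 660.0) ** -0.5
    return k_cm_per_h / (100.0 * 3600.0)

def airsea_flux(c_water, c_air, henry_dimensionless, u10, schmidt=1000.0):
    """Flux in mol m^-2 s^-1; positive = out of the ocean.
    henry_dimensionless is taken here as C_air/C_water at equilibrium."""
    kw = transfer_velocity(u10, schmidt)
    return kw * (c_water - c_air / henry_dimensionless)

# Example: hypothetical surface-ocean and air concentrations of acetone
print(airsea_flux(c_water=15e-6, c_air=5e-6, henry_dimensionless=0.01, u10=7.0))
```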

Relevance:

30.00%

Publisher:

Abstract:

Since the development of quantum mechanics it has been natural to analyze the connection between classical and quantum mechanical descriptions of physical systems. In particular, one expects that, in some sense, when quantum mechanical effects become negligible the system behaves as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (that depend on Planck's constant h) converge, as h goes to zero, to classical values evolving according to the expected classical equations. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting through a Yukawa-type potential with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator between suitable coherent states converges in the classical limit to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to suitable products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed-particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
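The classical evolution alluded to here is, schematically, a Schrödinger equation coupled to a Klein-Gordon equation through a Yukawa-type interaction. In a commonly used schematic form, with u the particle wave function, phi the classical field and chi a form-factor/cut-off, it reads as below; the precise cut-offs and constants depend on the thesis' conventions, so this should be read as an indicative form only.

```latex
% Schematic Schrodinger-Klein-Gordon system for the classical limit of the
% Nelson model (constants and cut-offs are indicative, not the thesis' exact ones).
\begin{cases}
  i\,\partial_t u(t,x) = -\dfrac{1}{2M}\,\Delta u(t,x)
      + \big(\phi(t,\cdot) * \chi\big)(x)\, u(t,x),\\[1ex]
  \big(\partial_t^2 - \Delta + m^2\big)\,\phi(t,x)
      = -\big(|u(t,\cdot)|^2 * \chi\big)(x).
\end{cases}
```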

Relevance:

30.00%

Publisher:

Abstract:

MultiProcessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, and they play an important role both in everyday-life products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics and medicine. Despite the remarkable performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called "Walls", that have hindered processor development. After the famous "Power Wall", which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the "Thermal Wall" and the "Utilization Wall" are the current key limiters of performance improvements. The former concerns the damaging effects on the chip of the high temperatures caused by large power densities, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limits on the power and temperature budgets. In this thesis we face these challenges by developing efficient and reliable solutions able to maximize performance while keeping the maximum temperature below a fixed critical threshold and saving energy. This is made possible by the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions over a future interval. A fully distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. Control feasibility, and properties useful for simplifying the control design, have been proved by studying a partial differential equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
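A minimal receding-horizon sketch of the idea, assuming a linear discrete-time thermal model T[k+1] = A*T[k] + B*p[k] (temperature driven by core power/performance level): at each step, maximize performance over the horizon subject to a temperature ceiling, apply the first decision, and re-solve. This is a toy single-core illustration, not the thesis' distributed controller; the model and all constants are assumptions.

```python
# Hedged MPC sketch: maximize performance (power) over a horizon while keeping
# the predicted temperature below T_MAX. Toy scalar thermal model, not the
# thesis' distributed controller.
import numpy as np
from scipy.optimize import linprog

A, B = 0.9, 0.5           # assumed scalar thermal dynamics: T[k+1] = A*T[k] + B*p[k]
T_MAX, P_MAX = 80.0, 10.0
H = 5                     # prediction horizon

def mpc_step(T0):
    # Predicted temperature is affine in p: T[k] = A^k*T0 + sum_j A^(k-1-j)*B*p[j]
    G = np.zeros((H, H))
    for k in range(1, H + 1):
        for j in range(k):
            G[k - 1, j] = A ** (k - 1 - j) * B
    headroom = np.array([T_MAX - A ** k * T0 for k in range(1, H + 1)])
    # linprog minimizes, so use -sum(p) to maximize total performance
    res = linprog(c=-np.ones(H), A_ub=G, b_ub=headroom,
                  bounds=[(0.0, P_MAX)] * H, method="highs")
    return res.x[0]  # receding horizon: apply only the first decision

T = 60.0
for step in range(10):
    p = mpc_step(T)
    T = A * T + B * p
    print(f"step {step}: p = {p:.2f}, T = {T:.2f}")
```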

Relevance:

30.00%

Publisher:

Abstract:

The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations, and on the inclusion of component flexibility, is developed: both are necessary if one wants to capture the dynamic effects which arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: angular-contact ball bearings are modelled according to a five-DOF nonlinear scheme to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model provides an enhanced prediction of the operation at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques, and the advantages over the conventional frequency-based truncation approach are discussed.
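The Craig-Bampton reduction mentioned above combines static constraint modes with fixed-interface normal modes. A compact numpy sketch of the textbook procedure is given below, on generic matrices rather than the thesis' crankshaft or conrod models.

```python
# Hedged sketch of the textbook Craig-Bampton reduction: keep boundary DOFs,
# replace interior DOFs with a few fixed-interface modes. Generic matrices,
# not the thesis' crankshaft/conrod models.
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary, n_modes):
    """Return reduced (M_r, K_r) and the transformation T."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit boundary displacements
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition
    w2, Phi = eigh(Kii, M[np.ix_(i, i)])
    Phi = Phi[:, :n_modes]
    # Assemble T mapping [boundary DOFs; modal coordinates] -> full DOFs
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = Psi
    T[i, len(b):] = Phi
    return T.T @ M @ T, T.T @ K @ T, T

# Tiny example: 4-DOF spring chain, first and last DOFs as the interface
K = np.diag([2., 2, 2, 2]) - np.diag([1., 1, 1], 1) - np.diag([1., 1, 1], -1)
M = np.eye(4)
Mr, Kr, T = craig_bampton(M, K, boundary=[0, 3], n_modes=1)
print(Kr.shape)  # (3, 3): 2 boundary DOFs + 1 fixed-interface mode
```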

Relevance:

30.00%

Publisher:

Abstract:

This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on the Polynomial Chaos Expansion (PCE) theory, represents a versatile solution for direct and inverse problems related to the propagation of uncertainty. The potential of the methodology is assessed by investigating different applicative contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been designed for flexibility and efficiency, in order to ensure its adaptability to different fields of engineering, and it has been applied to several case studies of flow and transport in porous media. Each application is associated with innovative elements, such as (i) new analytical formulations describing the motion and displacement of non-Newtonian fluids in porous media, (ii) the application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) the development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
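Non-intrusive PCE of the kind described can be sketched in a few lines: sample the uncertain input, evaluate the model, fit the expansion coefficients by least squares, and read moments directly from the coefficients. The sketch below uses a one-dimensional Gaussian input with probabilists' Hermite polynomials; it is a generic illustration in Python, not the thesis' MATLAB code.

```python
# Hedged sketch of non-intrusive Polynomial Chaos Expansion (PCE) for a scalar
# model with one Gaussian input, using probabilists' Hermite polynomials.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def model(x):                      # placeholder "expensive" model
    return np.exp(0.3 * x) + 0.1 * x**2

rng = np.random.default_rng(1)
xi = rng.standard_normal(2000)     # samples of the standard-normal input
y = model(xi)

P = 6                              # expansion order
V = hermevander(xi, P)             # design matrix of He_0..He_P at the samples
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# E[He_k^2] = k! for probabilists' Hermite, so moments follow from coefficients:
norms = np.array([factorial(k) for k in range(P + 1)])
mean = coef[0]
variance = np.sum(coef[1:] ** 2 * norms[1:])
print(f"PCE mean = {mean:.4f}, variance = {variance:.4f}")
print(f"MC  mean = {y.mean():.4f}, variance = {y.var():.4f}")
```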

Relevance:

30.00%

Publisher:

Abstract:

A model is developed to represent the activity of a farm using linear programming. The model has two main components: the balance of soil fertility and livestock nutrition. According to the first, the farm has a total nitrogen requirement, which is to be met either from internal sources (manure) or from external sources (fertilisers). The second component describes the animal husbandry as having a nutritional requirement which must be satisfied through the internal production of arable crops or the purchase of feed on the market. The farmer is assumed to maximise total net income from the agricultural and livestock activities by choosing one rotation among those feasible for the local climate and slope. The perspective of the analysis is short-run: the structure of the farm is taken as fixed, with no possibility of changing the allocation of permanent crops or the size of the herd. The model is integrated with an environmental module that describes the role of the farm within the carbon-nitrogen cycle. On the one hand the farm stores carbon through the photosynthesis of the plants and the accumulation of carbon in the soil; on the other, some farm activities emit greenhouse gases into the atmosphere. The model is tested on some representative farms of the Emilia-Romagna region, proving capable of producing distinct results for conventional and organic farming and providing first results on their different atmospheric impact. The relevant data on the representative farms and the feasible rotations are extracted from the FADN database, with an integration of coefficients from the literature.
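The structure of such a farm LP can be sketched compactly: decision variables for crop areas and purchased feed, an income objective, and constraints for land, the nitrogen balance and livestock nutrition. All coefficients below are invented for illustration; only the structure mirrors the kind of model described.

```python
# Hedged sketch of the farm LP structure: choose crop areas (ha) and purchased
# feed (t) to maximize net income subject to land, nitrogen and nutrition
# constraints. All coefficients are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Variables: x = [area_wheat, area_maize, feed_bought]
margin = np.array([600.0, 750.0, -250.0])     # net margin per unit (feed is a cost)
land_total = 50.0                             # ha available
n_need = 120.0 * land_total                   # total nitrogen requirement, kg
n_manure = 4000.0                             # nitrogen available from manure, kg
n_per_ha = np.array([150.0, 180.0, 0.0])      # fertiliser N applied per ha
feed_req = 200.0                              # livestock feed requirement, t
feed_yield = np.array([3.0, 6.0, 1.0])        # feed produced/bought per unit

res = linprog(
    c=-margin,                                            # maximize income
    A_ub=np.array([[1.0, 1.0, 0.0]]), b_ub=[land_total],  # land limit
    A_eq=np.array([n_per_ha, feed_yield]),
    b_eq=[n_need - n_manure, feed_req],                   # N balance, nutrition
    bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)
```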

Relevance:

30.00%

Publisher:

Abstract:

The transmembrane potential difference Δφm is directly linked to the catalytic activity of cytochrome c oxidase (CcO). CcO is the terminal enzyme (complex IV) of the mitochondrial respiratory chain. The enzyme catalyzes the reduction of O2 to 2 H2O, with electrons transferred from the natural substrate cytochrome c to the CcO. The electron transfer within the CcO is coupled to proton translocation across the membrane. Consequently, a difference in proton concentration builds up across the inner mitochondrial membrane, and in addition a potential difference Δφm is generated.

The transmembrane potential Δφm can be measured by fluorescence spectroscopy using a potential-sensitive dye. To derive quantitative statements from such investigations, calibration measurements must first be performed on the membrane system.

In this work, calibration measurements of Δφm in a model membrane with incorporated CcO are presented. To this end, a biomimetic membrane system, the protein-tethered bilayer lipid membrane (ptBLM), was developed on a transparent, conductive substrate (indium tin oxide, ITO). ITO allows the simultaneous use of electrochemical and fluorescence or optical waveguide spectroscopic methods. The Δφm in the ptBLM was induced by externally applied, defined electrical potentials.

A thin hydrogel layer was used as a soft cushion for the ptBLM on ITO. The polymer network contains NTA functional groups for the oriented immobilization of the CcO on the surface of the hydrogel by means of the Ni-NTA technique. The ptBLM was formed after immobilization of the CcO by in-situ dialysis. Electrochemical impedance measurements showed a high electrical resistance (≈ 1 MΩ) of the ptBLM. Optical waveguide spectra (SPR/OWS) showed an increased anisotropy of the system after formation of the lipid bilayer. Cyclic voltammetry measurements of reduced cytochrome c confirmed the activity of the CcO in the hydrogel-supported ptBLM. The membrane potential in the hydrogel-supported ptBLM, induced by defined electrical potentials, was measured by ratiometric fluorescence spectroscopy. Reference measurements with a simple tethered bilayer lipid membrane (tBLM) provided a conversion factor between the ratiometric parameter Rn and the membrane potential (0.05 / 100 mV). The detection limit for the membrane potential in a hydrogel-supported ptBLM was ≈ 80 mV. These data form a good basis for future investigations of the self-generated Δφm of the CcO in a ptBLM.
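The reported calibration (Rn changes by 0.05 per 100 mV) implies a simple linear conversion from the ratiometric parameter to the membrane potential. A small sketch, with the detection-limit check included, might look like this; the function and variable names are ours, not from the thesis.

```python
# Hedged sketch: convert the ratiometric parameter Rn to a membrane potential
# using the reported calibration factor 0.05 per 100 mV. Names are illustrative.
CAL_RN_PER_100MV = 0.05    # from the tBLM reference measurements
DETECTION_LIMIT_MV = 80.0  # reported detection limit in the hydrogel ptBLM

def membrane_potential_mv(rn: float) -> float:
    """Linear conversion: delta_phi [mV] = Rn * 100 / 0.05."""
    return rn * 100.0 / CAL_RN_PER_100MV

rn = 0.06
phi = membrane_potential_mv(rn)
print(f"Rn = {rn} -> {phi:.0f} mV "
      f"({'above' if phi >= DETECTION_LIMIT_MV else 'below'} detection limit)")
```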

Relevance:

30.00%

Publisher:

Abstract:

During my PhD I developed an innovative technique to reproduce the 3D thymic microenvironment in vitro, to be used for the growth and differentiation of thymocytes and, possibly, for transplantation in conditions of depressed thymic immune regulation. The work was carried out in the Tissue Engineering laboratory of the University Hospital in Basel, Switzerland, under the tutorship of Prof. Ivan Martin. Since a number of studies have suggested that the 3D structure of the thymic microenvironment might play a key role in regulating the survival and functional competence of thymocytes, I focused my effort on the isolation and purification of the extracellular matrix of the mouse thymus. Specifically, based on the assumption that thymic epithelial cells (TECs) can favour the differentiation of pre-T lymphocytes, I developed a specific decellularization protocol to obtain the intact, DNA-free extracellular matrix of the adult mouse thymus. Two different protocols satisfied the main characteristics of a decellularized matrix, according to qualitative and quantitative assays. In particular, the residual DNA was less than 10% in absolute value, no positive staining for cells was found, and the 3D structure and composition of the ECM were maintained. In addition, I was able to prove that the decellularized matrices were not cytotoxic for the cells themselves, and that they increased the expression of MHC II antigens compared to control cells grown in standard conditions. I was also able to show that TECs grow and proliferate for up to ten days on top of the decellularized matrix. After a complete characterization of the culture system, these innovative natural scaffolds could be used to improve the standard culture conditions of TECs, to study in vitro the action of different factors on their differentiation genes, and to test the ability of TECs to induce in vitro maturation of seeded T lymphocytes.

Relevance:

30.00%

Publisher:

Abstract:

The research hypothesis of the thesis is that "open participation in the co-creation of services and environments makes life easier for vulnerable groups", assuming that participatory and emancipatory approaches are processes of possible actions and changes aimed at facilitating people's lives. The adoption of these approaches is put forward as the common denominator of socially innovative practices that, by supporting inclusive processes, allow a shift from a medical model to a civil- and human-rights approach to disability. The theoretical basis of this assumption finds support in many principles of Inclusive Education, and the main focus of the research hypothesis is on participation and emancipation as approaches to facing emerging and existing problems related to inclusion. The framework of reference for the research is given by the perspectives adopted in several international documents concerning policies and interventions to promote and support the leadership and participation of vulnerable groups. In the first part, an in-depth analysis of the main academic publications on the central themes of the thesis is carried out. After investigating the framework of reference, the analysis focuses on the main tools of participatory and emancipatory approaches, which connect with the concepts of active citizenship and social innovation. In the second part, two case studies concerning participatory and emancipatory approaches in the areas of concern are presented and analyzed as examples of the improvement of inclusion through the involvement and participation of persons with disabilities. The research has been developed using a holistic and interdisciplinary approach, aimed at providing a knowledge base that fosters a shift from a situation of passivity and care towards a new scenario based on the person's commitment to the elaboration of his/her own project of life.

Relevance:

30.00%

Publisher:

Abstract:

The atmosphere exerts a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for the understanding of different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, which is dated to 12,900 years before present. Taken together the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequencies of glacial dust-storm events and their paleo wind directions are detected by high-resolution grain-size and provenance analysis of the lake sediments. Two different methods are applied: geochemical measurements of the sediment using µXRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections).

It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar lies on the western side of the Eifel North-South zone, so carbonate-rich aeolian sediment is most likely transported towards it by easterly winds. A methodology is developed which restricts the detection to the aeolian-transported carbonate particles in the sediment: the RADIUS-carbonate module.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east-wind frequency are increased in comparison to MIS-2. These results suggest that the atmospheric circulation during MIS-3 was affected by more turbulent conditions, compared with the more stable circulation during the full glacial conditions of MIS-2.

The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models (AGCMs) for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east-wind conditions and of east-wind storm events, which are suggested to lead to enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea-surface-temperature patterns. Furthermore, the analysis of long-persisting east-wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all the experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high from the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with a contemporaneously enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe. While the proxy data from the sediment cores yield a binary signal of wind-direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models succeeds in showing a possible distribution of high- and low-pressure areas, and thus the direction and strength of the wind fields that have the capacity to transport dust. In conclusion, the combination of numerical models, which enhance our understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
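Identifying "long-persisting east-wind conditions" in AGCM output essentially means scanning the 10 m zonal wind for runs of consecutive easterly days. A minimal sketch of such a detector follows; the threshold and minimum duration are assumptions for illustration, not the thesis' criteria.

```python
# Hedged sketch: count long-persisting east-wind episodes in a daily series of
# the 10 m zonal wind u10 (u10 < 0 = easterly). Threshold and minimum duration
# are illustrative assumptions, not the thesis' criteria.
import numpy as np

def east_wind_episodes(u10, threshold=-1.0, min_days=5):
    """Return (start, length) of runs with u10 < threshold lasting >= min_days."""
    easterly = u10 < threshold
    episodes, start = [], None
    for day, is_east in enumerate(easterly):
        if is_east and start is None:
            start = day
        elif not is_east and start is not None:
            if day - start >= min_days:
                episodes.append((start, day - start))
            start = None
    if start is not None and len(u10) - start >= min_days:
        episodes.append((start, len(u10) - start))
    return episodes

rng = np.random.default_rng(2)
u10 = rng.normal(loc=1.0, scale=3.0, size=365)   # synthetic daily zonal wind
print(east_wind_episodes(u10))
```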

Relevance:

30.00%

Publisher:

Abstract:

The instability of river banks can result in considerable human and land losses. The Po River is the most important river in Italy, characterized by main embankments of significant and constantly increasing height. This study presents multilayer-perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po River, under various river and groundwater boundary conditions. To this aim, a number of networks of threshold logic units are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influential geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed upon stability analyses performed with a finite element code over different representative sections of the river embankments. For validation, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and that they notably outperform the derived multiple linear regression models.
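A multilayer-perceptron regressor of this kind is straightforward to sketch with scikit-learn. The feature names below (bank geometry, soil strength, river and groundwater levels) are illustrative stand-ins for the thesis' input parameters, and the data are synthetic.

```python
# Hedged sketch: MLP regression of a factor of safety (FS) from geometrical and
# geotechnical inputs. Feature set and data are synthetic illustrations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
# Hypothetical inputs: slope angle, bank height, friction angle, cohesion,
# river level, groundwater level
X = rng.uniform([20, 5, 25, 0, 0, 0], [45, 15, 40, 30, 10, 8], size=(n, 6))
# Synthetic FS for the demo only (no physical meaning)
fs = (1.5 - 0.02 * X[:, 0] + 0.03 * X[:, 2] + 0.01 * X[:, 3]
      - 0.04 * (X[:, 5] - X[:, 4]) + rng.normal(scale=0.05, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, fs, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```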

Relevance:

30.00%

Publisher:

Abstract:

The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. For various types of external input the membrane potential behaves differently. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential is repetitively spiking; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and the frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described by examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model in the computer simulations of chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to examine the influence of noise on the results of chapter 2.
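For reference, a compact forward-Euler integration of the classical Hodgkin-Huxley equations under a constant applied current (input type (i)) might look as follows. The parameters are the standard squid-axon values in the shifted convention of the 1952 paper (resting potential at 0 mV); the step size and initial gate values are our choices for the sketch, not the thesis' code.

```python
# Hedged sketch: forward-Euler integration of the classical Hodgkin-Huxley
# model with a constant applied current (input type (i)). Standard squid-axon
# parameters, shifted so that V_rest = 0 mV as in the 1952 paper.
import numpy as np

C = 1.0                                   # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3            # maximal conductances, mS/cm^2
ENa, EK, EL = 115.0, -12.0, 10.6          # reversal potentials, mV

def alpha_n(V): return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
def beta_n(V):  return 0.125 * np.exp(-V / 80)
def alpha_m(V): return 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4.0 * np.exp(-V / 18)
def alpha_h(V): return 0.07 * np.exp(-V / 20)
def beta_h(V):  return 1.0 / (np.exp((30 - V) / 10) + 1)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    steps = int(T / dt)
    V, n, m, h = 0.0, 0.32, 0.05, 0.6     # approximate resting state
    trace = np.empty(steps)
    for k in range(steps):
        I_ion = (gNa * m**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK) + gL * (V - EL))
        V += dt * (I_ext - I_ion) / C
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace[k] = V
    return trace

V = simulate()  # I_ext = 10 uA/cm^2 lies in the repetitive-spiking range
print(f"max depolarization: {V.max():.1f} mV")
```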

Relevance:

30.00%

Publisher:

Abstract:

The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem by using the maximum entropy principle is illustrated. The code UMESTRAT is built to apply the described strategy in a semi-automatic way, and its application is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows the evaluation of this contribution for the K, L and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Secondly, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle and energy, for all elements with Z = 1-92 in the energy range 1-150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung radiative contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of the bremsstrahlung is created, and a new bremsstrahlung kernel is developed which allows the introduction of this contribution into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
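The maximum entropy strategy for linear inverse problems of the type b = A x can be sketched as a regularized minimization: fit the data while penalizing deviations from a prior spectrum through the entropy. The snippet below is a generic illustration of that principle, not the UMESTRAT code; the problem sizes, prior and regularization weight are assumptions.

```python
# Hedged sketch of maximum-entropy inversion for a linear problem b = A x:
# minimize ||A x - b||^2 - lam * S(x), with S the entropy relative to a flat
# prior (Skilling form). Generic illustration, not the UMESTRAT code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_data, n_par = 20, 40
A = rng.random((n_data, n_par))
x_true = np.exp(-0.5 * ((np.arange(n_par) - 20) / 4.0) ** 2)   # smooth "spectrum"
b = A @ x_true + rng.normal(scale=0.01, size=n_data)

prior = np.full(n_par, x_true.sum() / n_par)                   # flat prior model
lam = 1e-3

def objective(x):
    x = np.maximum(x, 1e-12)                                   # keep log defined
    entropy = np.sum(x - prior - x * np.log(x / prior))        # Skilling entropy
    return np.sum((A @ x - b) ** 2) - lam * entropy

res = minimize(objective, x0=prior.copy(), method="L-BFGS-B",
               bounds=[(1e-12, None)] * n_par)
print(f"residual norm: {np.linalg.norm(A @ res.x - b):.4f}")
```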

Relevance:

30.00%

Publisher:

Abstract:

Until a few years ago, 3D modelling was a topic confined to a professional environment. Nowadays technological innovations, above all the 3D printer, have attracted novice users to this application field. This sudden breakthrough has not been supported by adequate software solutions. The 3D editing tools currently available do not assist the non-expert user during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which is largely bound to two-dimensional input/output devices and strongly affected by the obvious geometrical constraints. We have identified three main phases that characterize the creation and management of 3D virtual models. We investigated these directions, evaluating and simplifying the classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling to create 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. In pursuing these goals we asked how new gesture-based interaction technologies can be successfully employed in a 3D modelling environment, how we could improve depth perception and interaction in 3D environments, and which operations could be developed to simplify the classical virtual model editing paradigm. Our main aim was to propose a set of solutions with which a common user can realize an idea in a 3D virtual model, drawing in the air just as he would on paper. Moreover, we tried to use gestures and mid-air movements to explore and interact with the 3D virtual environment, and we studied simple and effective 3D form transformations. The work was carried out adopting the discrete representation of the models, thanks to its intuitiveness, but especially because it is full of open challenges.
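Free-form deformation of the kind mentioned is classically implemented with a trivariate Bernstein lattice (Sederberg and Parry, 1986): embed the mesh vertices in a control-point box, displace the control points, and re-evaluate. A minimal numpy sketch follows; it is a generic illustration of that classic technique, not the thesis' editing tool.

```python
# Hedged sketch of classic free-form deformation (Sederberg-Parry): vertices are
# expressed in the local coordinates of a control lattice and re-evaluated with
# trivariate Bernstein polynomials. Generic illustration, not the thesis' tool.
import numpy as np
from math import comb

def bernstein(i, n, t):
    return comb(n, i) * t**i * (1 - t) ** (n - i)

def ffd(vertices, lattice, bbox_min, bbox_max):
    """Deform Nx3 vertices with an (l+1, m+1, n+1, 3) control lattice."""
    l, m, n = (s - 1 for s in lattice.shape[:3])
    stu = (vertices - bbox_min) / (bbox_max - bbox_min)   # local coords in [0,1]^3
    out = np.zeros_like(vertices)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = (bernstein(i, l, stu[:, 0]) * bernstein(j, m, stu[:, 1])
                     * bernstein(k, n, stu[:, 2]))
                out += w[:, None] * lattice[i, j, k]
    return out

# 2x2x2 lattice spanning the unit cube; nudge one corner control point upward
bbox_min, bbox_max = np.zeros(3), np.ones(3)
lattice = np.stack(np.meshgrid(*[np.linspace(0, 1, 2)] * 3, indexing="ij"), axis=-1)
lattice[1, 1, 1] += np.array([0.0, 0.0, 0.4])
verts = np.random.default_rng(5).random((100, 3))
print(ffd(verts, lattice, bbox_min, bbox_max).shape)  # (100, 3)
```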