937 results for Inovation models in nets


Relevance: 100.00%

Abstract:

In non-linear random effects models, some attention has recently been devoted to the analysis of suitable transformations of the response variables, either separately from the transformations of the covariates (Taylor 1996) or not (Oberg and Davidian 2000), and, as far as we know, no investigation has been carried out on the choice of link function in such models. In our study we consider the use of a random effects model in which a parameterized family of links (Aranda-Ordaz 1981, Prentice 1996, Pregibon 1980, Stukel 1988, Czado 1997) is introduced. We point out the advantages and drawbacks associated with this data-driven kind of modeling. Difficulties in the interpretation of regression parameters, and therefore in understanding the influence of covariates, as well as problems related to loss of efficiency of estimates and overfitting, are discussed. A case study on radiotherapy usage in breast cancer treatment is presented.
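As an illustration of such a parameterized link family, the sketch below implements the Aranda-Ordaz (1981) asymmetric link, which reduces to the logit link at λ = 1 and tends to the complementary log-log link as λ → 0. The function names and the small demonstration loop are hypothetical, not the authors' code; it only shows how a single extra parameter lets the data drive the shape of the link.

```python
import numpy as np

def aranda_ordaz_link(mu, lam):
    """Aranda-Ordaz (1981) asymmetric link: g(mu) = log(((1 - mu)**-lam - 1) / lam).
    Reduces to the logit link for lam = 1 and tends to cloglog as lam -> 0."""
    mu = np.asarray(mu, dtype=float)
    if np.isclose(lam, 0.0):
        return np.log(-np.log(1.0 - mu))          # complementary log-log limit
    return np.log(((1.0 - mu) ** (-lam) - 1.0) / lam)

def aranda_ordaz_inverse(eta, lam):
    """Inverse link, mapping the linear predictor eta back to a probability."""
    eta = np.asarray(eta, dtype=float)
    if np.isclose(lam, 0.0):
        return 1.0 - np.exp(-np.exp(eta))
    return 1.0 - (1.0 + lam * np.exp(eta)) ** (-1.0 / lam)

# Example: the same linear predictor gives different fitted probabilities
# depending on the (data-driven) choice of lambda.
eta = np.linspace(-3, 3, 7)
for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, np.round(aranda_ordaz_inverse(eta, lam), 3))
```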

Relevance: 100.00%

Abstract:

Petri Nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems and have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand. Therefore, HLPNs are more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is a software tool that supports the formal modeling capabilities in this framework. For analysis, this framework combines three complementary techniques: simulation, explicit state model checking and bounded model checking (BMC). Simulation is a straightforward and speedy method, but it only covers some execution paths in an HLPN model. Explicit state model checking covers all the execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
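As a minimal illustration of the execution semantics that simulation explores, the following sketch implements the firing rule of a low-level place/transition net in Python. The place and transition names, the initial marking, and the random firing policy are invented for illustration; this is not PIPE+ or the PrT Net formalism itself, only the token-game idea on which both build.

```python
import random

# A low-level Petri net: transitions consume tokens from input places and
# produce tokens in output places; a transition is enabled when every input
# place holds at least as many tokens as the corresponding arc weight.
marking = {"idle": 1, "request": 0, "critical": 0}

transitions = {
    "acquire": {"in": {"idle": 1}, "out": {"request": 1}},
    "enter":   {"in": {"request": 1}, "out": {"critical": 1}},
    "release": {"in": {"critical": 1}, "out": {"idle": 1}},
}

def enabled(t, m):
    return all(m[p] >= w for p, w in transitions[t]["in"].items())

def fire(t, m):
    m = dict(m)
    for p, w in transitions[t]["in"].items():
        m[p] -= w
    for p, w in transitions[t]["out"].items():
        m[p] += w
    return m

# Simulation covers one execution path: repeatedly fire a randomly chosen
# enabled transition (model checking would instead explore all paths).
for step in range(5):
    choices = [t for t in transitions if enabled(t, marking)]
    if not choices:
        break
    t = random.choice(choices)
    marking = fire(t, marking)
    print(step, t, marking)
```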

Relevance: 100.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the U.S. Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks present the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights reflecting each species' sensitivity to OWED collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
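Under assumed inputs, the sketch below shows the core of such a routing calculation: each step costs unit distance plus a multiplier times the conservation cost of the entered cell, and a least-cost path between two cells is found with Dijkstra's algorithm. The grid, multiplier values, and 4-neighbour moves are simplifications of the analysis described above, not the actual implementation.

```python
import heapq
import numpy as np

def least_cost_route(conservation_cost, start, end, multiplier):
    """Least-cost path on a grid where each step costs
    1 (distance) + multiplier * conservation_cost of the entered cell."""
    rows, cols = conservation_cost.shape
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), np.inf):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbour moves
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + multiplier * conservation_cost[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[end]

# Varying the multiplier trades distance against conservation cost,
# tracing out the tradeoff curve between industry and cetaceans.
cost = np.random.default_rng(0).random((20, 30))
for m in (0.0, 1.0, 5.0):
    route, total = least_cost_route(cost, (0, 0), (19, 29), m)
    print(m, len(route), round(total, 2))
```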

Essential to the input of these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the parameters necessary to estimate it, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
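A minimal sketch of that threshold step, assuming scikit-learn and synthetic observations and predictions: the threshold maximizing Youden's J statistic (sensitivity + specificity − 1) is one common way to jointly limit false positive and false negative rates. The variable names and data are illustrative, not the chapter's code or datasets.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)

# Illustrative stand-ins for observed presence/absence and GAM-predicted
# probabilities of occurrence in each space-time grid cell.
observed = rng.integers(0, 2, size=500)
predicted = np.clip(0.3 * observed + rng.normal(0.35, 0.2, size=500), 0, 1)

fpr, tpr, thresholds = roc_curve(observed, predicted)

# Youden's J = sensitivity + specificity - 1 = tpr - fpr; its maximum gives a
# cutoff that jointly limits false positives and false negatives.
best = np.argmax(tpr - fpr)
threshold = thresholds[best]
presence_map = (predicted >= threshold).astype(int)

print(f"optimal threshold = {threshold:.3f}, TPR = {tpr[best]:.2f}, FPR = {fpr[best]:.2f}")
```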

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
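For context on the CDS side, the sketch below computes a textbook line-transect density estimate under a half-normal detection function, using made-up perpendicular distances and effort; it is not Raincoast's analysis or the output of the Distance software, only the estimator that CDS is built on.

```python
import numpy as np

# Illustrative perpendicular detection distances (km) and total transect length (km).
distances_km = np.array([0.05, 0.12, 0.30, 0.08, 0.22, 0.15, 0.40, 0.02, 0.18, 0.27])
effort_km = 250.0
n = distances_km.size

# Half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)); with no
# truncation the maximum-likelihood estimate of sigma^2 is mean(x^2).
sigma_hat = np.sqrt(np.mean(distances_km ** 2))

# Effective strip half-width: integral of g(x) from 0 to infinity.
esw_km = sigma_hat * np.sqrt(np.pi / 2)

# Conventional distance sampling estimator: one density for the whole stratum.
density_per_km2 = n / (2 * esw_km * effort_km)
print(f"ESW = {esw_km:.3f} km, density = {density_per_km2:.3f} animals/km^2")
```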

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, enabling conservation interests, industry and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance: 100.00%

Abstract:

Ground-source heat pump (GSHP) systems represent one of the most promising techniques for heating and cooling in buildings. These systems use the ground as a heat source/sink, allowing better efficiency thanks to the low variation of the ground temperature across the seasons. The ground-source heat exchanger (GSHE) then becomes a key component for optimizing the overall performance of the system. Moreover, the short-term response related to the dynamic behaviour of the GSHE is a crucial aspect, especially from a control-criteria perspective in on/off-controlled GSHP systems. In this context, a novel numerical GSHE model has been developed at the Instituto de Ingeniería Energética, Universitat Politècnica de València. Based on decoupling the short-term and long-term responses of the GSHE, the novel model allows the use of faster and more precise models on both sides. In particular, the short-term model considered is the B2G model, developed and validated in previous research works conducted at the Instituto de Ingeniería Energética. For the long term, the g-function model was selected, since it is a previously validated and widely used model and presents features that are useful for combining it with the B2G model. The aim of the present paper is to describe the procedure for combining these two models in order to obtain a single complete GSHE model for both short- and long-term simulation. The resulting model is then validated against experimental data from a real GSHP installation.
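To illustrate the long-term side of such a coupling, the sketch below applies the standard g-function temporal superposition to obtain the borehole-wall temperature under a stepwise heat-load history. The tabulated g-function values, loads, and ground properties are invented, and the short-term (B2G) part is deliberately omitted; this is not the combined model described in the paper.

```python
import numpy as np

def borehole_wall_temperature(loads_w_per_m, dt_s, k_ground, T_ground,
                              g_ln_t_ts, g_values, t_s):
    """Borehole-wall temperature history by temporal superposition of a
    tabulated g-function (long-term response only; no short-term model)."""
    q = np.asarray(loads_w_per_m, dtype=float)
    # Heat-load increments: each step's contribution starts when the step begins.
    dq = np.diff(np.concatenate(([0.0], q)))
    n = q.size
    T = np.empty(n)
    for j in range(n):                                    # end of step j
        elapsed = (np.arange(j + 1)[::-1] + 1) * dt_s     # time since each step started
        g = np.interp(np.log(elapsed / t_s), g_ln_t_ts, g_values)
        T[j] = T_ground + np.sum(dq[: j + 1] * g) / (2 * np.pi * k_ground)
    return T

# Illustrative inputs: a coarse g-function table over ln(t/ts), 12 monthly load
# steps (positive = injection), conductivity 2 W/(m K), undisturbed ground at 15 C.
g_ln_t_ts = np.array([-8.0, -6.0, -4.0, -2.0, 0.0, 2.0])
g_values = np.array([2.0, 3.0, 4.5, 5.8, 6.5, 6.8])
monthly_loads = np.array([30, 28, 20, 10, -5, -20, -25, -22, -10, 5, 18, 27], float)
T_b = borehole_wall_temperature(monthly_loads, dt_s=730 * 3600, k_ground=2.0,
                                T_ground=15.0, g_ln_t_ts=g_ln_t_ts,
                                g_values=g_values, t_s=30 * 365 * 24 * 3600)
print(np.round(T_b, 2))
```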

Relevance: 100.00%

Abstract:

When studying a biological regulatory network, it is usual to use boolean network models. In these models, boolean variables represent the behavior of each component of the biological system. Taking into account that the size of these state transition models grows exponentially with the number of components considered, it becomes important to have tools to minimize such models. In this paper, we relate bisimulations, which are relations used in the study of automata (general state transition models), with attractors, which are an important feature of biological boolean models. Hence, we support the idea that bisimulations can be important tools in the study of some main features of boolean network models. We also discuss the differences between using this approach and other well-known methodologies to study this kind of system, and we illustrate it with some examples.
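A minimal sketch of the objects involved, assuming synchronous updates over a toy three-component network whose update rules are invented for illustration: it follows every trajectory until a state repeats and collects the resulting attractors (fixed points or cycles).

```python
from itertools import product

# Toy boolean regulatory network with synchronous updates; each component's
# next state is a boolean function of the current state vector (a, b, c).
def update(state):
    a, b, c = state
    return (b, a and not c, not a)

def attractors():
    found = set()
    for start in product((False, True), repeat=3):
        seen = {}
        state = start
        step = 0
        while state not in seen:          # iterate until a state repeats
            seen[state] = step
            state = update(state)
            step += 1
        # States from the first repeated state onward form the attractor.
        cycle_start = seen[state]
        cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1]) if i >= cycle_start]
        found.add(tuple(sorted(cycle)))
    return found

for att in attractors():
    label = "fixed point" if len(att) == 1 else f"cycle of length {len(att)}"
    print(label, att)
```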

Relevance: 100.00%

Abstract:

The purpose of this paper is to contribute to the debate on corporate governance models in European transition economies. The paper consists of four parts. After a historic overview of the evolution of corporate governance, the introduction presents various understandings of the corporate governance function and describes current issues in corporate governance. Part two deals with governance systems in the (mainly domestically) privatized former state-owned companies in Central European transition countries, with the main types of company ownership structures, relationships between governing and management functions, and deficiencies in existing governance systems. Part three is dedicated to the analysis of factors that determine the efficiency of the relationship between the corporate governance and management functions in Central European transition economies. It deals with the issue of why the German (continental European) governance model is usually the preferred choice and why the chosen models underperform. In the conclusion the author offers his suggestions on how the Central European transition countries should improve their corporate governance in the future.

Relevance: 100.00%

Abstract:

The purpose of this research was to apply a test that measures multiple intelligences in children from two different elementary schools, to determine whether there are differences between the Academicist Pedagogical Model (traditional approach) established by the Costa Rican Ministry of Public Education and the Cognitive Pedagogical Model (MPC) (constructivist approach). A total of 29 boys and 20 girls aged 8 to 12 from two public schools in Heredia (Laboratorio School and San Isidro School) participated in this study. The instrument used was a Multiple Intelligences Test for school-age children (Vega, 2006), which consists of 15 items subdivided into seven categories: linguistic, logical-mathematical, visual, kinaesthetic, musical, interpersonal, and intrapersonal. Descriptive and inferential statistics (two-way ANOVA) were used for the analysis of the data. Significant differences were found in linguistic intelligence (F = 9.47; p < 0.01) between the MPC school (3.24±1.24 points) and the academicist school (2.31±1.10 points). Differences were also found between sexes (F = 5.26; p < 0.05), for girls (3.25±1.02 points) and boys (2.52±1.30 points). In addition, musical intelligence showed statistically significant differences between sexes (F = 7.97; p < 0.05). In conclusion, the pedagogical models used in Costa Rican public schools must be updated based on new learning trends.

Relevance: 100.00%

Abstract:

The application of 3D grain-based modelling techniques is investigated in both small and large scale 3DEC models, in order to simulate brittle fracture processes in low-porosity crystalline rock. Mesh dependency in 3D grain-based models (GBMs) is examined through a number of cases to compare Voronoi and tetrahedral grain assemblages. Various methods are used in the generation of tessellations, each with a number of issues and advantages. A number of comparative UCS test simulations capture the distinct failure mechanisms, strength profiles, and progressive damage development using various Voronoi and tetrahedral GBMs. Relative calibration requirements are outlined to generate similar macro-strength and damage profiles for all the models. The results confirmed a number of inherent model behaviors that arise due to mesh dependency. In Voronoi models, inherent tensile failure mechanisms are produced by internal wedging and rotation of Voronoi grains. This results in a combined dependence on frictional and cohesive strength. In tetrahedral models, increased kinematic freedom of grains and an abundance of straight, connected failure pathways cause a preference for shear failure. This results in an inability to develop significant normal stresses, causing a dependence on cohesive strength. In general, Voronoi models require high relative contact tensile strength values, with lower contact stiffness and contact cohesive strength compared to tetrahedral tessellations. Upscaling of 3D GBMs is investigated for both Voronoi and tetrahedral tessellations using a case study from AECL's Mine-by Experiment at the Underground Research Laboratory. By adjusting the cohesive strength, an upscaled tetrahedral model was able to reasonably simulate damage development in the roof, forming a notch geometry. An upscaled Voronoi model underestimated the damage development in the roof and floor, and overestimated the damage in the side-walls. This was attributed to limitations in discretization resolution.

Relevance: 100.00%

Abstract:

My PhD research focused on the anatomical, physiological and functional study of the gastrointestinal system in two different animal models. In two different contexts, the purpose of these two lines of research was to contribute to understanding how a specific genetic mutation or the adoption of a particular dietary supplement can affect gastrointestinal function. Functional gastrointestinal disorders are chronic conditions characterized by symptoms for which no organic cause can be found. Although symptoms are generally mild, a small subset of cases shows severe manifestations. This subset of patients may also have recurrent intestinal sub-occlusive episodes in the absence of mechanical causes. This condition is referred to as chronic intestinal pseudo-obstruction (CIPO), a rare, intractable chronic disease. Some mutations have been associated with CIPO. A novel causative RAD21 missense mutation was identified in a large consanguineous family segregating a recessive form of CIPO. The present thesis aimed to elucidate the mechanisms leading to the neuropathy underlying CIPO via a recently developed conditional knock-in (KI) mouse carrying the RAD21 mutation. The experimental studies are based on the characterization and functional analysis of the conditional KI Rad21A626T mouse model. On the other hand, aquaculture is increasing the global supply of food. The species selected and feeds used affect the nutrients available from aquaculture, and there is a need to improve feed efficiency for both economic and environmental reasons, which will require novel, innovative approaches. Nutritional strategies focused on the use of botanicals have attracted interest in animal production. Previous research indicates positive results from using essential oils (EOs) as natural feed additives for several farmed animals. Therefore, the present study was designed to compare the effects of feed EO supplementation in two different forms (natural, and composed of active ingredients obtained by synthesis) on the gastric mucosa of European sea bass.

Relevance: 100.00%

Abstract:

In recent years, there has been increasing interest in understanding the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium, ICM) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations and solving double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We find that CGL-MHD subsonic and supersonic turbulence shows small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines. The results indicate that in some cases the instabilities significantly increase the anisotropy of the fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been widely believed.
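As a sketch of the kind of statistics computed, the code below estimates the second-order structure function of a synthetic one-dimensional velocity field over a range of separations. The field is random noise standing in for simulation output, and the decomposition into the local reference frame along magnetic field lines is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1D velocity field on a periodic grid (stands in for a cut
# through the simulation cube; the real analysis would use the 3D fields).
n = 4096
velocity = np.cumsum(rng.normal(size=n))
velocity -= velocity.mean()

def structure_function(v, lags):
    """Second-order structure function S2(l) = <(v(x + l) - v(x))^2>
    on a periodic domain."""
    return np.array([np.mean((np.roll(v, -l) - v) ** 2) for l in lags])

lags = np.unique(np.logspace(0, np.log10(n // 4), 20).astype(int))
S2 = structure_function(velocity, lags)

# Local slope of log S2 vs log l indicates the scaling of the fluctuations.
slopes = np.diff(np.log(S2)) / np.diff(np.log(lags))
print(np.round(slopes, 2))
```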

Relevance: 100.00%

Abstract:

This study aimed to describe and compare ventilation behavior during an incremental test using three mathematical models, and to compare the features of the ventilation curve fitted by the best mathematical model between aerobically trained (TR) and untrained (UT) men. Thirty-five subjects underwent a treadmill test with 1 km·h⁻¹ increases every minute until exhaustion. Twenty-second ventilation averages were plotted against time and fitted by: a bi-segmental regression model (2SRM); a three-segmental regression model (3SRM); and a growth exponential model (GEM). The residual sum of squares (RSS) and mean square error (MSE) were calculated for each model. Correlations were calculated between peak VO2 (VO2PEAK), peak speed (SpeedPEAK), the ventilatory threshold identified by the best model (VT2SRM), and the first derivative calculated for workloads below (moderate intensity) and above (heavy intensity) VT2SRM. The RSS and MSE for GEM were significantly higher (p < 0.01) than for 2SRM and 3SRM in the pooled data and in UT, but no significant difference was observed among the mathematical models in TR. In the pooled data, the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.58; p < 0.01) and SpeedPEAK (r = -0.46; p < 0.05), while the first derivative at heavy intensities showed a significant negative correlation with VT2SRM (r = -0.43; p < 0.05). In the UT group, the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.65; p < 0.05) and SpeedPEAK (r = -0.61; p < 0.05), while in the TR group the first derivative at heavy intensities showed significant negative correlations with VT2SRM (r = -0.73; p < 0.01), SpeedPEAK (r = -0.73; p < 0.01) and VO2PEAK (r = -0.61; p < 0.05). Ventilation behavior during an incremental treadmill test tends to show only one threshold. UT subjects showed a slower increase in ventilation at moderate intensities, while TR subjects showed a slower increase in ventilation at heavy intensities.
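A minimal sketch of the bi-segmental fit on synthetic ventilation data: every candidate breakpoint is tried, two independent least-squares lines are fitted on either side, and the breakpoint minimizing the residual sum of squares is taken as the ventilatory threshold. The data and the unconstrained, discontinuous two-line fit are simplifications of the 2SRM described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic incremental-test data: time (min) and ventilation (L/min) rising
# slowly below a threshold at 7 min and faster above it, plus noise.
time = np.arange(1, 16, dtype=float)
ve = np.where(time < 7, 20 + 2 * time, 6 + 4 * time) + rng.normal(0, 1.5, time.size)

def bisegmental_fit(x, y):
    """Grid-search two-segment linear fit; returns breakpoint and total RSS."""
    best = (None, np.inf)
    for i in range(3, x.size - 2):            # keep at least 3 points per segment
        rss = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)
            rss += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if rss < best[1]:
            best = (x[i], rss)
    return best

breakpoint_min, rss = bisegmental_fit(time, ve)
print(f"estimated ventilatory threshold at {breakpoint_min:.1f} min (RSS = {rss:.1f})")
```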

Relevance: 100.00%

Abstract:

Background: Human Papillomavirus (HPV) is the main etiological factor for cervical cancer. Different studies show that in women infected with HPV there is a positive correlation between lesion grade and the number of infiltrating macrophages, as well as with higher IL-10 expression. Using an HPV16-associated tumor model in mice, TC-1, our laboratory has demonstrated that tumor-infiltrating macrophages are M2-like, induce a T cell regulatory phenotype and play an important role in tumor growth. M2 macrophages secrete several cytokines, among them IL-10, which has been shown to play a role in T cell suppression by tumor macrophages in other tumor models. In this work, we sought to establish whether IL-10 is part of the mechanism by which HPV tumor-associated macrophages induce a T cell regulatory phenotype, inhibiting anti-tumor activity and facilitating tumor growth. Results: TC-1 tumor cells do not express or respond to IL-10, but they recruit leukocytes which, within the tumor environment, produce this cytokine. Using IL-10-deficient mice or blocking IL-10 signaling with neutralizing antibodies, we observed a significant reduction in tumor growth, an increase in tumor infiltration by HPV16 E7-specific CD8 lymphocytes, including a population positive for Granzyme B and Perforin expression, and a decrease in the percentage of HPV-specific regulatory T cells in the lymph nodes. Conclusions: Our data show that in the HPV16 TC-1 tumor mouse model, IL-10 produced by tumor macrophages induces a regulatory phenotype in T cells, an immune escape mechanism that facilitates tumor growth. Our results point to a possible mechanism behind the epidemiological data correlating higher IL-10 expression with the risk of cervical cancer development in HPV-infected women.

Relevance: 100.00%

Abstract:

Background: Placentas of guinea pig-related rodents are appropriate animal models for human placentation because of their striking similarities to those of humans. To optimize the pool of potential models in this context, it is essential to identify the occurrence of characters in close relatives. Methods: In this study we first analyzed chorioallantoic placentation in the prea, Galea spixii, one of the guinea pig's closest relatives. Material was collected from a breeding group at the University of Mossoró, Brazil, including 18 individuals covering an ontogenetic sequence from early pregnancy to term. Placentas were investigated by means of histology, electron microscopy, immunohistochemistry (vimentin, alpha-smooth muscle actin, cytokeratin) and proliferation activity (PCNA). Results: Placentation in Galea is primarily characterized by an apparent regionalization into labyrinth, trophospongium and subplacenta. It also has associated growth processes, with clusters of proliferating trophoblast cells at the placental margin, internally directed projections, and a second centre of proliferation in the labyrinth. Finally, the subplacenta, which is temporarily supplied in parallel by the maternal and fetal blood systems, served as the centre of origin for trophoblast invasion. Conclusion: Placentation in Galea reveals major parallels to the guinea pig and other caviomorphs with respect to the regionalization of the placenta, the associated growth processes, and trophoblast invasion. A principal difference compared to the guinea pig occurs in the blood supply of the subplacenta. Characteristics of the invasion and expansion processes indicate that Galea may serve as an additional animal model that is much smaller than the guinea pig and whose subplacenta partly has access to both maternal and fetal blood systems.