7 results for macroscopic traffic flow models
in Helda - Digital Repository of the University of Helsinki
Abstract:
Traffic congestion is one of the biggest challenges facing cities worldwide. Car traffic and traffic jams cause major problems, and congestion is predicted to worsen in the future. The greenhouse effect poses a severe global threat to the environment; at the same time, companies and other economic actors lose time and money because of traffic congestion. This work studies possible traffic payment systems for the Helsinki Metropolitan Area, introducing three optional models and concentrating on the viewpoint of economic actors. A central part of this work is a questionnaire conducted among companies located in the Helsinki area, which gained more than 1,000 responses. The study examines the respondents' attitudes toward the area's current traffic system, its development and urban congestion pricing, and the answers are analyzed according to the size, industry and location of the companies. The economic aspect is studied through the economic theory of industrial location and by emphasizing the importance of smoothly running traffic for business. Chapter three presents detailed information about traffic congestion: how today's car-centered society has formed, what congestion concretely means for economic life, and how traffic congestion can be limited. It is examined theoretically how urban traffic payment systems work, using examples from London and Stockholm, where successful traffic payment experiences exist. The literature review analyzes urban development, increasing car traffic and the Helsinki Metropolitan Area from a structural point of view. The fourth chapter introduces a case study, which concentrates on the Helsinki Metropolitan Area's different structures, the congestion situation in Helsinki, and the clarification study of the traffic payment system.
Currently the region is in a phase of major changes in traffic planning: the traffic systems are being unified to cover the whole region in the future, and solutions to the growing traffic congestion problems are needed. Chapter five concentrates on the questionnaire and theme interviews and introduces the research findings. The respondents' overall opinion of the traffic payments is quite skeptical. Some regional differences were found, and taxi, bus, cargo and transit enterprises in particular held the most negative opinions. Economic actors were worried especially because traffic congestion harms business travel and employees' commutes. According to the respondents, the best of the traffic payment models was the ring model, in which the payment points would be situated inside Ring Road III. Both the company representatives and other key decision makers see public transportation as a good and powerful tool for decreasing traffic congestion. The question that remains is where to find investors willing to invest in public transportation if economic representatives do not believe in pricing traffic through, for example, traffic payment systems.
Abstract:
The future use of genetically modified (GM) plants in food, feed and biomass production requires careful consideration of the possible risks related to the unintended spread of transgenes into new habitats. This may occur via introgression of the transgene into conventional genotypes, due to cross-pollination, and via the invasion of GM plants into new habitats. Assessment of the possible environmental impacts of GM plants requires estimating the level of gene flow from a GM population. Furthermore, management measures for reducing gene flow from GM populations are needed in order to prevent possible unwanted effects of transgenes on ecosystems. This work develops modeling tools for estimating gene flow from GM plant populations in boreal environments and for investigating the mechanisms of the gene flow process. To describe the spatial dimensions of gene flow, dispersal models are developed for the local- and regional-scale spread of pollen grains and seeds, with special emphasis on wind dispersal. The study provides tools for describing cross-pollination between GM and conventional populations and for estimating the levels of transgenic contamination of conventional crops. For perennial populations, a modeling framework describing the dynamics of plants and genotypes is developed in order to estimate the gene flow process over a sequence of years. The dispersal of airborne pollen and seeds cannot be easily controlled, and small amounts of these particles are likely to disperse over long distances. Wind dispersal processes are highly stochastic due to variation in atmospheric conditions, so there may be considerable variation between individual dispersal patterns. This, in turn, is reflected in the large variation in annual levels of cross-pollination between GM and conventional populations.
Even though land-use practices affect the average levels of cross-pollination between GM and conventional fields, the level of transgenic contamination of a conventional crop remains highly stochastic. The demographic effects of a transgene influence the establishment of transgenic plants amongst conventional genotypes of the same species. If the transgene gives a plant a considerable fitness advantage over conventional genotypes, the spread of transgenes into conventional populations can be strongly increased. In such cases, dominance of the transgene considerably increases gene flow from GM to conventional populations, due to the enhanced fitness of heterozygous hybrids. The fitness of GM plants in conventional populations can be reduced by linking the selectively favoured primary transgene to a disfavoured mitigation transgene. Recombination between these transgenes is a major risk of this technique, especially because it tends to take place amongst the conventional genotypes and thus promotes the establishment of invasive transgenic plants in conventional populations.
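The interplay described above, between a distance-decaying dispersal kernel and strong year-to-year stochasticity from atmospheric variation, can be sketched in a toy simulation. All function names, the exponential kernel, and the parameter values here are illustrative assumptions, not the models or estimates from the thesis:

```python
import math
import random

def cross_pollination_rate(distance_m, mean_dispersal_m=50.0, p0=0.3):
    """Expected fraction of seeds sired by GM pollen at a given distance,
    using a simple exponential dispersal kernel (illustrative parameters)."""
    return p0 * math.exp(-distance_m / mean_dispersal_m)

def simulate_annual_rates(distance_m, years=10, cv=0.5, seed=1):
    """Draw stochastic annual cross-pollination levels: the expected rate is
    scaled by a mean-one lognormal factor standing in for year-to-year
    variation in atmospheric conditions (coefficient of variation cv)."""
    rng = random.Random(seed)
    mu = cross_pollination_rate(distance_m)
    sigma = math.sqrt(math.log(1.0 + cv ** 2))  # lognormal shape parameter
    return [mu * rng.lognormvariate(-0.5 * sigma ** 2, sigma)
            for _ in range(years)]

# Even at a fixed distance, simulated annual contamination levels vary widely.
rates = simulate_annual_rates(distance_m=100.0)
```

The lognormal multiplier is only one simple way to encode the "highly stochastic" annual variation the abstract mentions; the average still decays with distance, but any single year can deviate substantially from it.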
Abstract:
In cardiac myocytes (heart muscle cells), the coupling of the electric signal known as the action potential to the contraction of the heart depends crucially on calcium-induced calcium release (CICR) in a microdomain known as the dyad. During CICR, the peak number of free calcium ions (Ca) present in the dyad is small, typically estimated to be within the range 1-100. Since the free Ca ions mediate CICR, noise in Ca signaling due to their small number influences excitation-contraction (EC) coupling gain. Noise in Ca signaling is only one noise type influencing cardiac myocytes; for example, the ion channels that play a central role in action potential propagation are stochastic machines, each of which gates more or less randomly, producing gating noise in membrane currents. How various noise sources influence the macroscopic properties of a myocyte, and how noise is attenuated or exploited, are largely open questions. In this thesis, the impact of noise on CICR, EC coupling and, more generally, the macroscopic properties of a cardiac myocyte is investigated at multiple levels of detail using mathematical models. Complementing the investigation of the impact of noise on CICR, computationally efficient yet spatially detailed models of CICR are developed. The results of this thesis show that (1) gating noise due to the high-activity mode of L-type calcium channels, which play a major role in CICR, may induce early after-depolarizations associated with polymorphic tachycardia, a frequent precursor to sudden cardiac death in heart failure patients; (2) an increased level of voltage noise typically increases action potential duration and skews the distribution of action potential durations toward long durations in cardiac myocytes; and (3) while a small number of Ca ions mediates CICR, excitation-contraction coupling is robust against this noise source, partly due to the shape of the ryanodine receptor protein structures present in the cardiac dyad.
Abstract:
Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services that take advantage of advances in many areas of technology are being invented. These services include the delivery of mass media such as television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services to the end user with acceptable performance and quality. This thesis presents an experimental study measuring the performance of bulk-data TCP transfers, streaming audio flows, and HTTP transfers that compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modeled with a wireless network emulator. We analyze how the different competing workload types behave with regular TCP, and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect performance and quality of service. We test four link types, including an error-free link and links with different levels of Automatic Repeat reQuest (ARQ) persistence. The analysis compares the resulting performance of the different configurations using defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because without them long queuing delays and congestion-related packet losses cause problems. However, we observed situations where there is still room for significant improvement if the link level is aware of quality-of-service requirements. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements, consisting of an initial window of four segments, Control Block Interdependence (CBI) and Forward RTO recovery (F-RTO), improves performance. The initial window of four helps a later-starting TCP flow start faster, but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
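As background to the RED/ECN results, the core of a RED gateway is a drop-or-mark probability that rises linearly between two thresholds on an exponentially averaged queue length. A minimal sketch follows; the threshold and weight values are illustrative defaults, not the configuration used in the thesis experiments:

```python
def ewma_queue(avg, sample, weight=0.002):
    """RED tracks a moving average of the instantaneous queue length, so
    short bursts are tolerated while persistent congestion is detected."""
    return (1.0 - weight) * avg + weight * sample

def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Drop (or, with ECN, mark) probability: zero below min_th, rising
    linearly to max_p at max_th, and 1 above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

With ECN enabled, a packet selected by this probability is marked rather than dropped, so TCP senders back off without losing data, which is consistent with the observation above that RED with ECN mitigates both long queuing delays and congestion-related losses.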
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, provided the initial state is known. However, such a modelling approach always contains approximations that largely depend on the purpose and resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range of about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, as model resolution is improved further, the model's tendency to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. For the shortwave part of the spectrum, on the other hand, the empirical schemes are more competitive in producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. For longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
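For flavor, one classical simple empirical scheme of the kind compared against NWP parameterizations is a Brunt-type clear-sky downward longwave formula combined with a quadratic cloud correction. The coefficient values below are common textbook choices, not the ones validated in the thesis:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def clear_sky_longwave_down(t_air_k, vapour_pressure_hpa, a=0.52, b=0.065):
    """Brunt-type empirical clear-sky downward longwave flux:
    effective emissivity eps = a + b*sqrt(e), flux = eps * sigma * T^4.
    Coefficients a and b are site-dependent empirical fits."""
    eps = a + b * vapour_pressure_hpa ** 0.5
    return eps * SIGMA * t_air_k ** 4

def cloud_corrected_longwave(clear_flux, cloud_fraction, c=0.22):
    """Simple cloud correction of the kind tested in the thesis:
    scale the clear-sky flux by (1 + c * N^2), N = cloud fraction 0..1."""
    return clear_flux * (1.0 + c * cloud_fraction ** 2)
```

Such formulas need only screen-level temperature, humidity and cloud fraction, which is why they remain competitive benchmarks against full radiative-transfer parameterizations for surface fluxes.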
Abstract:
This thesis is composed of an introductory chapter and four applications, each constituting its own chapter. The common element underlying the chapters is the econometric methodology: the applications rely mostly on leading econometric techniques for the estimation of causal effects. The first chapter introduces the econometric techniques employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance. It exploits the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall. It shows that the performance of men declined due to the news of the school shooting, while no similar pattern is observed for women. Chapter 3 studies the effects of the minimum wage on employment by employing the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data together with the changes-in-changes (CIC) estimator. As its main result, it shows that the employment effect of a minimum wage increase is positive for small fast-food restaurants and negative for big fast-food restaurants. The controversial positive employment effect reported by CK is thus overturned for big fast-food restaurants, and the NW data, in contrast to their original results, are shown to support a positive employment effect. Chapter 4 employs state-specific U.S. data on traffic fatalities (collected by Cohen and Einav [2003; CE]) to re-evaluate the effects of seat belt laws using the CIC estimator. It confirms the CE results that, on average, implementing a mandatory seat belt law increases the seat belt usage rate and decreases the total fatality rate. In contrast to CE, it also finds evidence for compensating-behavior theory, observed especially in states along the U.S. border.
Chapter 5 studies life cycle consumption in Finland, with special interest in the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations. It also shows that, compared to young households, old households smoothed their life cycle consumption more as a result of the recession of the 1990s.
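The changes-in-changes estimator used in Chapters 3 and 4 (Athey and Imbens) builds a counterfactual by mapping each treated unit's pre-period outcome through the control group's pre-period CDF and post-period quantile function. A minimal sketch with made-up data follows; the estimator is real, but all data and helper names here are illustrative:

```python
import bisect
import math
from statistics import mean

def ecdf(sample, y):
    """Empirical CDF: share of observations <= y."""
    s = sorted(sample)
    return bisect.bisect_right(s, y) / len(s)

def quantile(sample, q):
    """Empirical quantile: smallest observation y with F(y) >= q."""
    s = sorted(sample)
    idx = max(math.ceil(q * len(s) - 1e-9) - 1, 0)
    return s[min(idx, len(s) - 1)]

def cic_effect(y00, y01, y10, y11):
    """Changes-in-changes point estimate. Labels: y00 control/before,
    y01 control/after, y10 treated/before, y11 treated/after.
    The counterfactual post-period outcome of a treated unit with
    pre-period outcome y is F01^{-1}(F00(y)); the treatment effect is
    the mean gap between observed and counterfactual outcomes."""
    counterfactual = [quantile(y01, ecdf(y00, y)) for y in y10]
    return mean(y11) - mean(counterfactual)
```

If the treated and control groups share the same pre-period distribution and the time trend shifts control outcomes by one unit, the estimator attributes any remaining gap in the treated post-period outcomes to the treatment, unlike difference-in-differences it does so quantile by quantile rather than in means only.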
Abstract:
The study presents a theory of utility models based on aspiration levels, as well as an application of this theory to the planning of timber flow economics. The first part of the study derives the utility-theoretic basis for the use of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Applied here solely to partial utility functions, aspiration and reservation levels are interpreted as defining piecewise linear functions. The decision-maker's perspective on choice is emphasized through the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow that differs from the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive to even slight changes in relative importances and aspiration levels. This applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on aspiration levels per management period. Smooth trajectories can be attained by squaring the deviations of the feasible trajectories from the desired one.
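The contrast between the additive and multiplicative compound utilities above can be sketched with piecewise linear partial utilities anchored at a reservation level (utility 0) and an aspiration level (utility 1). This is a generic illustration of the two model families, not the thesis' exact formulation:

```python
def partial_utility(x, reservation, aspiration):
    """Piecewise linear partial utility: 0 at the reservation level,
    1 at the aspiration level, clipped outside that interval."""
    if aspiration == reservation:
        return 1.0 if x >= aspiration else 0.0
    u = (x - reservation) / (aspiration - reservation)
    return min(max(u, 0.0), 1.0)

def additive_utility(xs, levels, weights):
    """Additive compound utility: weighted sum of partial utilities,
    giving linear indifference curves between criteria."""
    return sum(w * partial_utility(x, r, a)
               for x, (r, a), w in zip(xs, levels, weights))

def multiplicative_utility(xs, levels, weights):
    """Multiplicative compound utility: weighted-power product
    (Cobb-Douglas-like), giving strictly convex indifference curves."""
    prod = 1.0
    for x, (r, a), w in zip(xs, levels, weights):
        prod *= partial_utility(x, r, a) ** w
    return prod
```

The multiplicative form drops to zero as soon as any single criterion falls to its reservation level, so no criterion can be fully traded away, which is one intuitive way to see why it generates strictly convex indifference curves while the additive form allows full substitution.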