17 results for Driving tests.

in Helda - Digital Repository of University of Helsinki


Relevance:

30.00%

Publisher:

Abstract:

In the future, the number of disabled drivers requiring a special evaluation of their driving ability will increase due to the ageing population as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research, there is still no consensus concerning the factual situation in a driver evaluation (methodology), which measures should be included in an evaluation (methods), and how an evaluation has to be carried out (practice). In order to find answers to these questions we carried out empirical studies, and simultaneously elaborated a conceptual model of driving and of driving evaluation. The findings of the empirical studies can be condensed into the following points: 1) Driving ability as defined by the on-road driving test is associated with different laboratory measures depending on the study group. Faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the experienced-driver group. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably evaluate a disabled person’s driving ability without collaborating with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists’ evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; rather, the correct mental actions which the driver carries out with the help of the control devices are of the greatest importance.
6) Personality factors, including higher-order needs and motives, attitudes, and a degree of self-awareness (particularly a sense of illness), are decisive when evaluating a disabled person’s driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions. From work with the conceptual model we drew the following methodological conclusions: First, the driver has to be considered as a holistic subject of the activity, a multilevel, hierarchically organised system of an organism, a temperament, an individuality, and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a question of matching these two hierarchically organised structures: a subject of an activity and the activity proper. Fourth, an evaluation has to be person-centred, not disease-, function-, or method-centred. On the basis of our study, a multidisciplinary team (practitioner, driving-school teacher, psychologist, occupational therapist) is recommended for demanding driver evaluations. Primary in driver evaluations is a coherent conceptual model, while the concrete evaluation methods may vary. However, the on-road test must always be performed if possible.

Relevance:

20.00%

Publisher:

Abstract:

Fatigue and sleepiness are major causes of road traffic accidents. However, precise data are often lacking because a validated and reliable device for detecting the level of sleepiness (cf. the breathalyzer for alcohol levels) does not exist, nor do criteria for the unambiguous detection of fatigue/sleepiness as a contributing factor in accident causation. Therefore, the identification of risk factors and groups is not always easy. Furthermore, it is extremely difficult to incorporate fatigue in operationalized terms into either traffic or criminal law. The main aims of this thesis were to estimate the prevalence of fatigue problems while driving among the Finnish driving population, to explore how VALT multidisciplinary investigation teams, the Finnish police, and the courts recognize (and prosecute) fatigue in traffic, to identify risk factors and groups, and finally to explore the application of the Finnish Road Traffic Act (RTA), which explicitly forbids driving while tired in Article 63. Several different sources of data were used: a computerized database and the original folders of multidisciplinary teams investigating fatal accidents (VALT), the driver records database (AKE), prosecutor and court decisions, a survey of young male military conscripts, and a survey of a representative sample of the Finnish active driving population. The results show that 8-15% of fatal accidents during 1991-2001 were fatigue related, that every fifth Finnish driver has fallen asleep while driving at some point during his/her driving career, and that the Finnish police and courts punish on average one driver per day for fatigued driving (based on data from the years 2004-2005). The main finding regarding risk factors and risk groups is that during the summer months, especially in the afternoon, the risk of falling asleep while driving is increased.
Furthermore, the results indicate that those with a higher risk of falling asleep while driving are men in general, but especially young male drivers (including military conscripts) and the elderly, particularly during the afternoon hours and in summer; professional drivers breaking the rules on duty and rest hours; and drivers with a tendency to fall asleep easily. A time-of-day pattern of sleep-related incidents was repeatedly found. VALT teams were found to be relatively reliable when assessing the role of fatigue and sleepiness in accident causation; thus, similar experts might be valuable as expert witnesses in court proceedings when fatigue or sleepiness is suspected to have played a role in an accident’s origins. However, the application of Article 63 of the RTA, which forbids, among other things, fatigued driving, will continue to be an issue that deserves further attention. This should be done in the context of a needed attitude change towards driving in a state of extreme tiredness (e.g., after being awake for more than 24 hours), which produces performance deterioration comparable to illegal intoxication (BAC around 0.1%). Given the well-known interactive effect of increased sleepiness and even small alcohol levels, the relatively high proportion (up to 14.5%) of Finnish drivers owning and using a breathalyzer raises some concern: these drivers are evidently more focused on not crossing the “magic” line of 0.05% BAC than on their driving impairment, which might be much worse than they realize because of the interaction of increased sleepiness with even low alcohol consumption. In conclusion, there is no doubt that fatigue and sleepiness problems while driving are common among the Finnish driving population.
While we wait for the invention of reliable devices for fatigue/sleepiness detection, we should invest more effort in raising public awareness about the dangerousness of fatigued driving and educate drivers about how to recognize and deal with fatigue and sleepiness when they ultimately occur.

Relevance:

20.00%

Publisher:

Abstract:

ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research) whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. Components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids, and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1%, and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems appearing during the learning curve of the project were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in tests as too many individual channel failures. By contrast, bonding failures rarely caused the rejection of any component. One sensor type among the three sensor manufacturers has proven to have lower quality than the others.
The sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.

Relevance:

20.00%

Publisher:

Abstract:

During the last 10-15 years, interest in mouse behavioural analysis has evolved considerably. The driving force is the development of molecular biological techniques that allow manipulation of the mouse genome by changing the expression of genes. Therefore, with some limitations, it is possible to study how genes participate in the regulation of physiological functions and to create models explaining the genetic contribution to various pathological conditions. The first aim of our study was to establish a framework for the behavioural phenotyping of genetically modified mice. We established a comprehensive battery of tests for the initial screening of mutant mice. These included tests for exploratory and locomotor activity, emotional behaviour, sensory functions, and cognitive performance. Our interest was in the behavioural patterns of common background strains used for genetic manipulations in mice. Additionally, we studied the behavioural effects of sex differences, test history, and individual housing. Our findings highlight the importance of careful consideration of genetic background in the analysis of mutant mice. It was evident that some backgrounds may mask or modify the behavioural phenotype of mutants and thereby lead to false positive or false negative findings. Moreover, there is no universal strain that is equally suitable for all tests, and using different backgrounds allows one to address possible phenotype-modifying factors. We discovered that previous experience affected performance in several tasks. The most sensitive traits were exploratory and emotional behaviour, as well as motor and nociceptive functions. Therefore, it may be essential to repeat some of the tests in naïve animals to confirm the phenotype. Long-term social isolation had strong effects on exploratory behaviour, but also on learning and memory.
All experiments revealed significant interactions between strain and environmental factors (test history or housing condition), indicating genotype-dependent effects of environmental manipulations. Several mutant-line analyses utilize this information. For example, we studied mice overexpressing, as well as those lacking, the extracellular matrix protein heparin-binding growth-associated molecule (HB-GAM), and mice lacking N-syndecan (a receptor for HB-GAM). All mutant mice appeared to be fertile and healthy, without any apparent neurological or sensory defects. The lack of HB-GAM and N-syndecan, however, significantly reduced the learning capacity of the mice. On the other hand, overexpression of HB-GAM resulted in facilitated learning. Moreover, HB-GAM knockout mice displayed higher anxiety-like behaviour, whereas anxiety was reduced in HB-GAM-overexpressing mice. Changes in hippocampal plasticity accompanied the behavioural phenotypes. We conclude that HB-GAM and N-syndecan are involved in the modulation of synaptic plasticity in the hippocampus and play a role in the regulation of anxiety- and learning-related behaviour.

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it.
Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4, the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
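For a model with conditional distribution function F, the quantile residual of observation y_t is Φ⁻¹(F(y_t; θ̂)), which is approximately i.i.d. standard normal under correct specification. A minimal sketch of this construction, using a simulated Gaussian AR(1) (where quantile residuals coincide with standardized residuals); the simulation and all names are illustrative, not from the thesis:

```python
# Hedged sketch: quantile residuals r_t = Phi^{-1}( F(y_t | past; theta_hat) ).
# Illustrative Gaussian AR(1); for mixture models the same cdf->ppf recipe
# applies, which is where quantile residuals outperform Pearson residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate an AR(1) series y_t = 0.5*y_{t-1} + e_t, e_t ~ N(0, 1)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal()

# Fit the AR(1) coefficient by least squares (conditional Gaussian MLE)
x, yy = y[:-1], y[1:]
phi = x @ yy / (x @ x)
resid = yy - phi * x
sigma = resid.std(ddof=1)

# Quantile residuals: push each observation through the fitted conditional
# cdf, then through the standard normal quantile function
u = stats.norm.cdf(yy, loc=phi * x, scale=sigma)
qres = stats.norm.ppf(u)

# Under a correctly specified model, qres is approximately i.i.d. N(0, 1)
print(round(float(qres.mean()), 2), round(float(qres.std()), 2))
```

The tests in Chapters 2-4 are then built on smooth functions or the empirical distribution function of these `qres` values.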

Relevance:

20.00%

Publisher:

Abstract:

This doctoral thesis explores the development of drug markets and drug-related crime in Finland since the mid-1990s, as well as public control measures aimed at solving problems related to drug crime. The research further examines the criminal careers of persons having committed drug crimes, as well as their socio-economic background. The period since the mid-1990s is, on the one hand, characterized by increasing use of drugs and increasingly severe drug problems. On the other hand, this period is also characterized by intensified drug control. Criminality associated with drugs has also increased and become more severe. During this period, the prevention of drug problems became a focal issue for the authorities, and resources were increased for activities geared towards fighting drugs. Along with this development, Finnish drug policy has been balancing between therapeutic activities and control. A focal point in this thesis is the question of how society addresses drug problems, as well as how this differs from efforts to solve other problems. Why are criminal means so readily used when dealing with drug problems; why have the police received an extended mandate to use coercive force; and why has the field for imposing administrative sanctions been extended? How has the extension of drug control affected general thinking in criminal policy? The subject matter in this thesis is approached from a criminological and criminal policy perspective. The thesis is made up of four research articles and a summary article. In the summary article, the studies were placed in the Finnish research context of drug criminality, drug control, and criminal policy. Furthermore, the author has assessed his own research location as a drug control researcher. Applying the notion of risk, an analysis was made of the threats posed by drugs to society. Theoretical perspectives were also brought to the fore on how society may regulate drug problems and the threats associated with them.
Based on the research literature and administrative documents, an analysis was made of the relation between drug-related social and health policy and criminal justice control. An account was also made of the development of drug control in Finland since the mid-1990s. There has been a strong increase in control by the criminal justice system since the mid-1990s. Penalties have been made more stringent, more efficient means have been developed to trace the financial gain from offences, opportunities for money laundering have been curtailed, and the police have obtained ample new powers of inquiry. New administrative measures have been directed towards drug users, such as introducing drug tests in working life, checking applicants’ criminal records for certain jobs, and the threat of losing one’s driving licence in cases where a physician has established drug addiction. In the 1990s, the prevention and disclosure of drug crimes were made part of the police’s control activities nationwide. This could clearly be seen in increased criminal statistics. There are humiliating elements associated with the police’s drug control that should be eliminated for the benefit of everybody. Furthermore, criminal control is directed towards persons in a weak socio-economic position. A drug verdict may set off a marginalization process that may be very difficult to halt. Drug control is selective and generates repressive practices. The special status accorded to drug problems is also revealed in the way in which the treatment of drug addicts has developed.

Relevance:

20.00%

Publisher:

Abstract:

The TOTEM collaboration has developed and tested the first prototype of its Roman Pots to be operated in the LHC. TOTEM Roman Pots contain stacks of 10 silicon detectors with strips oriented in two orthogonal directions. To measure proton scattering angles of a few microradians, the detectors will approach the beam centre to a distance of 10 sigma + 0.5 mm (= 1.3 mm). Dead space near the detector edge is minimised by using two novel "edgeless" detector technologies. The silicon detectors are used both for precise track reconstruction and for triggering. The first full-sized prototypes of both detector technologies as well as their read-out electronics have been developed, built and operated. The tests took place first in a fixed-target muon beam at CERN's SPS, and then in the proton beam-line of the SPS accelerator ring. We present the test beam results demonstrating the successful functionality of the system despite slight technical shortcomings to be improved in the near future.

Relevance:

20.00%

Publisher:

Abstract:

Most new drug molecules discovered today suffer from poor bioavailability. Poor oral bioavailability results mainly from the poor dissolution properties of hydrophobic drug molecules, because drug dissolution is often the rate-limiting step in the drug’s absorption through the intestinal wall into the systemic circulation. During the last few years, the use of mesoporous silica and silicon particles as oral drug delivery vehicles has been widely studied, with promising results regarding their suitability for enhancing the physicochemical properties of poorly soluble drug molecules. Mesoporous silica and silicon particles can be used to enhance the solubility and dissolution rate of a drug by incorporating the drug inside the pores, which are only a few times larger than the drug molecules, thus breaking the crystalline structure into a disordered, amorphous form with better dissolution properties. The high surface area of the mesoporous particles also improves the dissolution rate of the incorporated drug. In addition, mesoporous materials can enhance the permeability of large, hydrophilic drug substances across biological barriers. The loading process of drugs into silica and silicon mesopores is mainly based on the adsorption of drug molecules from a loading solution onto the silica or silicon pore walls. Several factors affect the loading process: the surface area, the pore size, the total pore volume, the pore geometry, and the surface chemistry of the mesoporous material, as well as the chemical nature of the drugs and the solvents. Furthermore, both the pore and the surface structure of the particles also affect the drug release kinetics. In this study, the loading of itraconazole into mesoporous silica (Syloid AL-1 and Syloid 244) and silicon (TOPSi and TCPSi) microparticles was studied, as well as the release of itraconazole from the microparticles and its stability after loading.
Itraconazole was selected for this study because of its highly hydrophobic and poorly soluble nature. Mesoporous materials with different surface structures, pore volumes, and surface areas were selected in order to evaluate the structural effect of the particles on the loading degree and dissolution behaviour of the drug using different loading parameters. The loaded particles were characterized with various analytical methods, and drug release from the particles was assessed by in vitro dissolution tests. The results showed that the loaded drug was apparently in amorphous form after loading, and that the loading process did not alter the chemical structure of the silica or silicon surface. Both the mesoporous silica and silicon microparticles enhanced the solubility and dissolution rate of itraconazole. Moreover, the physicochemical properties of the particles and the loading procedure were shown to have an effect on the drug loading efficiency and drug release kinetics. Finally, the mesoporous silicon particles loaded with itraconazole were found to be unstable under stressed conditions (at 38 °C and 70% relative humidity).
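The link between surface area and dissolution rate described above can be illustrated with the classical Noyes-Whitney relation, dC/dt = (D·A / (h·V))·(Cs − C). This is a general pharmaceutics formula, not a model fitted in this study, and every parameter value below is invented for illustration:

```python
# Hedged sketch of the Noyes-Whitney dissolution model:
#   dC/dt = (D * A / (h * V)) * (Cs - C)
# All parameter values are illustrative, not from the study.

def dissolve(A, D=5e-6, h=30e-4, V=900.0, Cs=1e-3, C0=0.0, t_end=3600, dt=1.0):
    """Integrate dC/dt = D*A/(h*V) * (Cs - C) with forward Euler.
    A: surface area (cm^2), D: diffusion coefficient (cm^2/s),
    h: diffusion-layer thickness (cm), V: medium volume (mL),
    Cs: saturation solubility (g/mL). Returns concentration at t_end."""
    k = D * A / (h * V)          # first-order dissolution rate constant
    C = C0
    for _ in range(int(t_end / dt)):
        C += k * (Cs - C) * dt   # concentration rises toward Cs
    return C

# A tenfold increase in surface area dissolves far more drug in the same time
low = dissolve(A=100.0)
high = dissolve(A=1000.0)
print(high > low)  # prints True
```

This is the qualitative mechanism by which the high surface area of mesoporous carriers speeds up release; the amorphous state additionally raises the effective Cs.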

Relevance:

20.00%

Publisher:

Abstract:

Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require accurate volatility estimates. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options present two patterns: the volatility smirk (skew) and the volatility term structure, which, examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS, extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS; of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the Hibbert et al. (2008) model. The results show a strongly negative asymmetric return-volatility relationship at various quantiles of the IV distributions, monotonically increasing in magnitude when moving from the median quantile to the uppermost quantile (i.e., 95%); therefore, OLS underestimates this relationship at the upper quantiles.
Additionally, the asymmetric relationship is more pronounced with the smirk (skew)-adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for forecasting the daily VaR of a portfolio of the DAX30 index; implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across the factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other’s shocks; however, surprisingly, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
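The quantile-regression approach of the second essay can be sketched as follows, assuming the statsmodels package and fully simulated data. The thesis uses actual index returns and volatility-index changes; here both the data and the variable names are illustrative, and the symmetric simulated noise will not reproduce the asymmetry found in real data:

```python
# Hedged sketch: quantile regression of volatility-index changes (dIV) on
# contemporaneous index returns (ret). Simulated data; in the thesis the
# estimated slope is negative and grows in magnitude toward upper quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
ret = rng.normal(0.0, 0.01, n)                    # daily index returns
# Negative return-volatility link plus heavy-tailed noise
dIV = -3.0 * ret + 0.01 * rng.standard_t(4, size=n)

df = pd.DataFrame({"dIV": dIV, "ret": ret})

slopes = {}
for q in (0.5, 0.95):                             # median and upper quantile
    res = smf.quantreg("dIV ~ ret", df).fit(q=q)
    slopes[q] = res.params["ret"]
print({q: round(b, 1) for q, b in slopes.items()})  # both slopes negative
```

With real IV data the comparison of `slopes[0.5]` and `slopes[0.95]` is exactly what reveals the asymmetry that OLS averages away.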

Relevance:

20.00%

Publisher:

Abstract:

This study contributes to the executive stock option literature by looking at the factors driving the introduction of such a compensation form at the firm level. Using a discrete decision model, I test the explanatory power of several agency-theory-based variables and find strong support for the predictability of the form of executive compensation. Ownership concentration and liquidity are found to have a significant negative effect on the probability of stock option adoption. Furthermore, I find evidence of CEO ownership, institutional ownership, investment intensity, and historical market return having a significant and positive relationship to the likelihood of adopting an executive stock option program.

Relevance:

20.00%

Publisher:

Abstract:

The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite-sample distribution is not well approximated by the limiting distribution. The article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
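The generic logic of a bootstrap p-value for a likelihood ratio statistic can be sketched with a deliberately simple example. Here the statistic is an LR test of unit variance, not the cointegration-rank test itself, whose bootstrap algorithm (resampling from the model estimated under the rank restriction) is considerably more involved:

```python
# Hedged sketch of the generic bootstrap p-value recipe for an LR statistic:
# compute the statistic on the data, re-simulate under the null, and compare.
import numpy as np

rng = np.random.default_rng(3)

def lr_stat(x):
    # LR statistic for H0: variance = 1 in an N(0, s2) model (mean known 0):
    # LR = n * (s2_hat - log(s2_hat) - 1), always >= 0
    n = len(x)
    s2 = np.mean(x ** 2)
    return n * (s2 - np.log(s2) - 1.0)

x = rng.normal(0.0, 1.0, 50)       # small sample: asymptotics can be poor
obs = lr_stat(x)

# Bootstrap: simulate B samples under H0 and recompute the statistic
B = 999
boot = np.array([lr_stat(rng.normal(0.0, 1.0, len(x))) for _ in range(B)])
p_boot = (1 + np.sum(boot >= obs)) / (B + 1)   # bootstrap p-value
print(0.0 < p_boot <= 1.0)
```

The FDB adds an inner resampling layer to correct the bootstrap p-value itself, which is what buys the further improvement the article reports.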

Relevance:

20.00%

Publisher:

Abstract:

Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new, computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
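For contrast, the brute-force, simulation-based way of estimating a test's power function, which is the baseline that a computationally inexpensive procedure improves on, can be sketched as follows. A one-sample t-test stands in for the cointegration-rank test purely for illustration:

```python
# Hedged sketch: estimating a power function by repeated simulation.
# The test and effect parametrization are illustrative stand-ins; for a
# bootstrap cointegration test each rejection decision would itself need
# an inner bootstrap, which is what makes this approach expensive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def power(effect, n=30, reps=500, alpha=0.05):
    """Fraction of simulated samples in which a t-test rejects H0: mu = 0."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(effect, 1.0, n)
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            rejections += 1
    return rejections / reps

p0 = power(0.0)   # size: should be near alpha
p1 = power(1.0)   # power at a large effect: should be near 1
print(p0 < p1)
```

Evaluating `power` on a grid of effect sizes traces out the power function whose bootstrap analogue the article estimates cheaply.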