Abstract:
Traffic safety studies demand more than current micro-simulation models can provide, as these models presume that all motor-vehicle drivers exhibit safe behaviours. Several car-following models are used in various micro-simulation packages. This research compares the mainstream car-following models' ability to emulate precise driver-behaviour parameters such as headways and Times to Collision. The comparison first illustrates which model is more robust in reproducing these metrics. Second, the study conducted a series of sensitivity tests to further explore the behaviour of each model. Based on the outcome of these two exploration steps, a modified structure and parameter adjustment is proposed for each car-following model to simulate more realistic vehicle movements, particularly headways and Times to Collision below a certain critical threshold. NGSIM vehicle trajectory data are used to evaluate the modified models' performance in assessing critical safety events within traffic flow. The simulation test outcomes indicate that the proposed modified models reproduce the frequency of critical Times to Collision better than the generic models, while the improvement in headways is not significant. The outcome of this paper facilitates traffic safety assessment using microscopic simulation.
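As a sketch of the safety metric this study targets, Time to Collision can be computed from the gap and the speed difference between follower and leader; the 1.5 s critical threshold below is an illustrative assumption, not a value taken from the study:

```python
def time_to_collision(gap_m, v_follower, v_leader):
    """Time to Collision in seconds; None when the follower is not closing the gap."""
    closing_speed = v_follower - v_leader  # m/s
    if closing_speed <= 0:
        return None
    return gap_m / closing_speed

def count_critical_ttc(events, threshold_s=1.5):
    """Count (gap, v_follower, v_leader) events whose TTC is below the threshold."""
    count = 0
    for gap, vf, vl in events:
        ttc = time_to_collision(gap, vf, vl)
        if ttc is not None and ttc < threshold_s:
            count += 1
    return count
```

Counting such sub-threshold events over a set of simulated trajectories is one way a frequency of critical Times to Collision can be compared between car-following models.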
Abstract:
Prostate cancer (CaP) is the second leading cause of cancer-related deaths in North American males and the most common newly diagnosed cancer in men worldwide. Biomarkers are widely used both for early detection and for prognostic tests for cancer. The current, commonly used biomarker for CaP is serum prostate-specific antigen (PSA). However, the specificity of this biomarker is low, as its serum level is increased not only in CaP but also in various other diseases, with age, and even with body mass index. Human body fluids provide an excellent resource for the discovery of biomarkers, with the advantage over tissue/biopsy samples of their ease of access, due to the less invasive nature of collection. However, their analysis presents challenges in terms of variability and validation. Blood and urine are two human body fluids commonly used for CaP research, but their proteomic analyses are limited both by the large dynamic range of protein abundance, which makes detection of low-abundance proteins difficult, and, in the case of urine, by the high salt concentration. To overcome these challenges, different techniques for removal of high-abundance proteins and enrichment of low-abundance proteins are used. Their applications and limitations are discussed in this review. A number of innovative proteomic techniques have improved detection of biomarkers. They include two-dimensional differential gel electrophoresis (2D-DIGE), quantitative mass spectrometry (MS) and functional proteomic studies, i.e., investigating the association of post-translational modifications (PTMs) such as phosphorylation, glycosylation and protein degradation. The recent development of quantitative MS techniques such as stable isotope labeling with amino acids in cell culture (SILAC), isobaric tags for relative and absolute quantitation (iTRAQ) and multiple reaction monitoring (MRM) has allowed proteomic researchers to quantitatively compare data from different samples.
2D-DIGE has greatly improved the statistical power of classical 2D gel analysis by introducing an internal control. This chapter aims to review novel CaP biomarkers as well as to discuss current trends in biomarker research from two angles: the source of biomarkers (particularly human body fluids such as blood and urine), and emerging proteomic approaches for biomarker research.
Abstract:
The authors present a qualitative and quantitative comparison of various similarity measures that form the kernel of common area-based stereo-matching systems. The authors compare classical difference and correlation measures as well as nonparametric measures based on the rank and census transforms for a number of outdoor images. For robotic applications, important considerations include robustness to image defects such as intensity variation and noise, the number of false matches, and computational complexity. In the absence of ground truth data, the authors compare the matching techniques based on the percentage of matches that pass the left-right consistency test. The authors also evaluate the discriminatory power of several match validity measures reported in the literature for eliminating false matches and for estimating match confidence. For guidance applications, it is essential to have an estimate of confidence in the three-dimensional points generated by stereo vision. Finally, a new validity measure, the rank constraint, is introduced that is capable of resolving ambiguous matches for rank transform-based matching.
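As a sketch of one of the nonparametric measures compared above, the census transform encodes each pixel's neighbourhood as a bit string, after which the matching cost between two pixels becomes a Hamming distance. The window radius and bit ordering here are illustrative choices, not the authors' exact implementation:

```python
import numpy as np

def census_transform(img, r=1):
    """Census transform: each pixel becomes a bit string encoding whether
    each neighbour in a (2r+1)x(2r+1) window is darker than the centre.
    Borders wrap around via np.roll, which is acceptable for a sketch."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between two census codes: count of differing bits."""
    x = np.uint64(c1) ^ np.uint64(c2)
    return bin(int(x)).count("1")
```

Because the cost counts only order relations between neighbours and the centre pixel, it is robust to the intensity variations between left and right cameras that defeat plain difference measures.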
Abstract:
The extraordinary event, for Deleuze, is the object becoming subject – not in the manner of an abstract formulation, such as the substitution of one ideational representation for another but, rather, in the introduction of a vast, new, impersonal plane of subjectivity, populated by object processes and physical phenomena that in Deleuze’s discovery will be shown to constitute their own subjectivities. Deleuze’s polemic of subjectivity (the refusal of the Cartesian subject and the transcendental ego of Husserl) – long attempted by other thinkers – is unique precisely because it heralds the dawning of a new species of objecthood that will qualify as its own peculiar subjectivity. A survey of Deleuze’s early work on subjectivity, Empirisme et subjectivité (Deleuze 1953), Le Bergsonisme (Deleuze 1968), and Logique du sens (Deleuze 1969), brings the architectural reader into a peculiar confrontation with what Deleuze calls the ‘new transcendental field’, the field of subject-producing effects, which for the philosopher takes the place of both the classical and modern subject. Deleuze’s theory of consciousness and perception is premised on the critique of Husserlian phenomenology; and ipso facto his question is an architectural problematic, even if the name ‘architecture’ is not invoked...
Abstract:
National flag carriers are struggling for survival, not only for classical reasons such as increases in fuel prices and taxes or natural disasters, but largely due to their inability to adapt quickly to their competitive environment – the emergence of budget and Persian Gulf airlines. In this research, we investigate how airlines can transform their business models via technological and strategic capabilities to become profitable and sustainable passenger-experience companies. To formulate recommendations, we analyze customer sentiment via social media to understand what people are saying about the airlines.
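A minimal sketch of scoring sentiment in social-media posts follows; the word lists and the scoring rule are hypothetical placeholders for illustration, not the lexicon or model used in this research:

```python
# Hypothetical opinion lexicons; a real study would use a trained model
# or an established sentiment lexicon rather than these toy lists.
POSITIVE = {"great", "comfortable", "friendly", "love", "helpful"}
NEGATIVE = {"delayed", "lost", "rude", "cramped", "terrible"}

def sentiment_score(text):
    """Score a post in [-1, 1]: (positive - negative) / total opinion words;
    0.0 when the post contains no opinion words at all."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Aggregating such scores per airline over time is one simple way to surface what passengers are saying about each carrier.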
Abstract:
To date, the only effective method of Fiber Bragg Grating (FBG) strain modulation has been changing the distance between its two fixed ends. We demonstrate an alternative that is more sensitive to force, based on the nonlinear amplification relationship between a transverse force applied to a stretched string and the axial force it induces. The approach may improve the sensitivity and size of an FBG force sensor, reduce the number of FBGs needed for multi-axial force monitoring, and control the resonant frequency of an FBG accelerometer.
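The static equilibrium behind this amplification can be sketched as follows: a transverse force F applied at the midpoint of a taut string deflected through angle theta is balanced by F = 2 T sin(theta), so the induced axial tension T = F / (2 sin(theta)) greatly exceeds F at small deflections. The geometry values below are illustrative, not the paper's sensor dimensions:

```python
import math

def induced_axial_force(f_transverse, half_span_m, deflection_m):
    """Static equilibrium of a taut string loaded at midspan:
    F_t = 2 * T * sin(theta)  =>  T = F_t / (2 * sin(theta))."""
    sin_theta = deflection_m / math.hypot(half_span_m, deflection_m)
    return f_transverse / (2.0 * sin_theta)

def amplification(half_span_m, deflection_m):
    """Axial-to-transverse force ratio; grows without bound as deflection -> 0,
    which is the source of the claimed sensitivity gain."""
    sin_theta = deflection_m / math.hypot(half_span_m, deflection_m)
    return 1.0 / (2.0 * sin_theta)
```

For example, a 1 mm deflection over a 50 mm half-span already multiplies the transverse force by roughly a factor of 25 in the axial direction seen by the FBG.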
Abstract:
Significant wheel-rail dynamic forces occur because of imperfections in the wheels and/or rail. One of the key responses to the transmission of these forces down through the track is the impact force on the sleepers. Dynamic analysis of nonlinear systems is very complicated and does not lend itself easily to a classical solution of multiple equations. Deducing the behaviour of track components from experimental data is very difficult because such data are hard to obtain and apply only to the particular conditions of the track being tested. The finite element method can be the best solution to this dilemma. This paper describes a finite element model, built with the software package ANSYS, of various sized flat defects in the tread of a wheel rolling at a typical speed on heavy haul track. The paper explores the dynamic response of a prestressed concrete sleeper to these defects.
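For intuition only, a crude single-degree-of-freedom estimate of the impact force from a wheel flat can be sketched as below; this is a hypothetical back-of-envelope model, not the paper's ANSYS finite element model, and the parameter values are illustrative:

```python
import math

def wheel_flat_impact_peak(flat_length_m, wheel_radius_m, unsprung_mass_kg,
                           track_stiffness_n_per_m, g=9.81):
    """Crude SDOF estimate: a flat of chord length L on a wheel of radius R
    lets the wheel drop by h ~= L**2 / (8 R); the impact velocity is
    v = sqrt(2 g h); an undamped spring-mass impact on a track of stiffness k
    then peaks at F = v * sqrt(k * m) (from equating kinetic and spring energy)."""
    drop_m = flat_length_m ** 2 / (8.0 * wheel_radius_m)
    v_impact = math.sqrt(2.0 * g * drop_m)
    return v_impact * math.sqrt(track_stiffness_n_per_m * unsprung_mass_kg)
```

Even this toy model shows impact force growing with the square root of track stiffness and roughly linearly with flat length, which is why a full nonlinear FE analysis of the sleeper response is worthwhile.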
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model, and subsequently understand, motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving account for a significant proportion of crash occurrence, yet are rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach, with a variety of statistical enhancements, has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and 'apparent' random influences that largely reflect behavioral influences of drivers.
It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model is a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
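The three-process argument can be illustrated with a small simulation: when site counts arise from several latent processes, the aggregate counts show variance well above the mean even though each component is Poisson, which is precisely the overdispersion that pushes analysts toward NB models. The rates below are arbitrary illustrative values, not the paper's empirical model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 5000

# Three hypothetical latent processes contributing to each site's crash count:
network = rng.poisson(lam=2.0, size=n_sites)             # observed network features
spatial = rng.poisson(lam=rng.gamma(1.0, 1.0, n_sites))  # unobserved spatial effects
behaviour = rng.poisson(lam=0.5, size=n_sites)           # 'apparent' random driver influences

counts = network + spatial + behaviour

# The gamma-mixed spatial component inflates variance above the mean, so the
# aggregate counts look overdispersed (variance/mean > 1) even though each
# underlying process is simple; an NB fit would absorb this without
# distinguishing the three sources.
overdispersion = counts.var() / counts.mean()
```

Fitting a single NB model to `counts` would reproduce the marginal distribution reasonably well while saying nothing about which of the three processes drives any given site, which is the paper's central complaint.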
Abstract:
Since the discovery of the first receptor tyrosine kinase (RTK) proteins in the late 1970s and early 1980s, many scientists have explored the functions of these important cell signaling molecules. The finding that these proteins are often deregulated or mutated in diseases such as cancers and diabetes, together with their potential as clinical therapeutic targets, has further highlighted the necessity of understanding the signaling functions of these important proteins. The mechanisms of RTK regulation and function have recently been reviewed by Lemmon & Schlessinger (2010); in this review we instead focus on the results of several recent studies showing that receptor tyrosine kinases can function from subcellular localisations, in particular the nucleus, in addition to their classical plasma membrane location. Nuclear localisation of receptor tyrosine kinases has been demonstrated to be important for normal cell function but is also believed to contribute to the pathogenesis of several human diseases.
Abstract:
The androgen receptor (AR) signaling pathway is a common therapeutic target for prostate cancer, because it is critical for the survival of both hormone-responsive and castrate-resistant tumor cells. Most of the detailed understanding that we have of AR transcriptional activation has been gained by studying classical target genes. For more than two decades, Kallikrein 3 (KLK3) (prostate-specific antigen) has been used as a prototypical AR target gene, because it is highly androgen responsive in prostate cancer cells. Three regions upstream of the KLK3 gene, including the distal enhancer, are known to contain consensus androgen-responsive elements required for AR-mediated transcriptional activation. Here, we show that KLK3 is one of a specific cluster of androgen-regulated genes at the centromeric end of the kallikrein locus with enhancers that evolved from the long terminal repeat (LTR) (LTR40a) of an endogenous retrovirus. Ligand-dependent recruitment of the AR to individual LTR-derived enhancers results in concurrent up-regulation of endogenous KLK2, KLK3, and KLKP1 expression in LNCaP prostate cancer cells. At the molecular level, a kallikrein-specific duplication within the LTR is required for maximal androgen responsiveness. Therefore, KLK3 represents a subset of target genes regulated by repetitive elements but is not typical of the whole spectrum of androgen-responsive transcripts. These data provide a novel and more detailed understanding of AR transcriptional activation and emphasize the importance of repetitive elements as functional regulatory units.
Abstract:
We consider the space fractional advection–dispersion equation, which is obtained from the classical advection–diffusion equation by replacing the spatial derivatives with a generalised derivative of fractional order. We derive a finite volume method that utilises fractionally-shifted Grünwald formulae for the discretisation of the fractional derivative, to numerically solve the equation on a finite domain with homogeneous Dirichlet boundary conditions. We prove that the method is stable and convergent when coupled with an implicit timestepping strategy. Results of numerical experiments are presented that support the theoretical analysis.
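A sketch of the shifted Grünwald discretisation follows. The weights g_k are the series coefficients of (1 - z)^alpha, and for alpha = 2 the shifted sum reduces to the classical second difference, which makes the scheme easy to sanity-check. The matrix assembly below pairs the discretisation with an implicit Euler step on a uniform grid with homogeneous Dirichlet boundaries; it is a minimal finite-difference sketch of the idea, not the paper's finite volume formulation:

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Grunwald weights: g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k,
    i.e. the coefficients of (1 - z)**alpha."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def implicit_step_matrix(alpha, n, dx, dt, d_coeff):
    """System matrix (I - dt * A) for one implicit Euler step of
    u_t = d * d^alpha u / dx^alpha with homogeneous Dirichlet BCs, where A
    uses the shifted Grunwald approximation (shift p = 1):
    d^alpha u / dx^alpha (x_i) ~= dx**(-alpha) * sum_k g_k * u_{i-k+1}."""
    g = grunwald_weights(alpha, n + 1)
    coeff = d_coeff / dx ** alpha
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = i - j + 1  # shifted weight index
            if 0 <= k <= n:
                A[i, j] = coeff * g[k]
    return np.eye(n) - dt * A
```

Each time step then solves the linear system `(I - dt*A) u_new = u_old`; the implicit treatment is what allows the unconditional stability the analysis establishes.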
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. First, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector (the Viola-Jones detector in this paper) is applied only to the selected regions to verify the presence of a face. The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
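A sketch of the skin-colour pre-selection stage follows; the Cb/Cr box thresholds are a widely used heuristic, not necessarily the thresholds used in this paper, and the sparse-grid subsampling mirrors the sample-pixel idea described above:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Boolean mask of skin-like pixels: convert RGB to YCbCr (BT.601 / JFIF
    coefficients) and keep pixels inside a fixed Cb/Cr box, a common heuristic."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

def sparse_grid(mask, step=4):
    """Subsample the mask on a sparse pixel grid, cutting whole-image scan
    cost by roughly step**2 before the expensive cascade runs."""
    return mask[::step, ::step]
```

Only regions where the (foreground and skin) evidence survives this cheap filtering are handed to the cascade detector, which is where the reported speed-up comes from.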
Abstract:
The preparedness theory of classical conditioning proposed by Seligman (1970, 1971) has been applied extensively over the past 40 years to explain the nature and "source" of human fear and phobias. In this review we examine the formative studies that tested the four defining characteristics of prepared learning with animal fear-relevant stimuli (typically snakes and spiders) and consider claims that fear of social stimuli, such as angry faces or faces of racial out-group members, may be acquired through the same preferential learning mechanism. Exposition of critical differences between fear learning to animal and social stimuli suggests that a single account cannot adequately explain both. We demonstrate that fear conditioned to social stimuli is less robust than fear conditioned to animal stimuli, as it is susceptible to cognitive influence, and propose that it may instead reflect negative stereotypes and social norms. Thus, a theoretical model that can accommodate the influence of both biological and cultural factors is likely to have broader utility in the explanation of fear and avoidance responses than accounts based on a single mechanism.
Abstract:
This study examined relationships between competitive trait anxiety and coping strategies among ballet dancers. Participants were 104 classical dancers (81 females and 23 males) ranging in age from 15 to 35 years (mean 19.4 years; SD 3.8 years), drawn from three professional ballet companies, two private dance schools, and two university dance courses in Australia. Participants completed the Modified COPE scale and the Sport Anxiety Scale. Trait anxiety scores, in particular for somatic anxiety and worry, were significant predictors of 7 of the 12 coping strategies (wishful thinking, r² = 42.3%; self-blame, r² = 35.7%; suppression of competing activities, r² = 27.1%; venting of emotions, r² = 23.2%; denial, r² = 17.7%; effort, r² = 16.6%; active coping, r² = 14.3%). Approximately 96% of dancers could be classified correctly as high or low trait-anxious from their reported coping style. No significant effects of gender or status (professional versus student) were found. Findings showed that high trait-anxious athletes tend to use more maladaptive, emotion-focused coping strategies than low trait-anxious athletes, a tendency that has been proposed to lead to negative performance effects. Dancers who are by nature anxious about performance may need special attention to help them learn to cope with performance-related stress. Med Probl Perform Art 18:59–64, 2003.
Abstract:
The pioneering work of Runge and Kutta a hundred years ago has ultimately led to suites of sophisticated numerical methods suitable for solving complex systems of deterministic ordinary differential equations. However, in many modelling situations the appropriate representation is a stochastic differential equation, and there numerical methods are much less sophisticated. In this paper a very general class of stochastic Runge-Kutta methods is presented, and classes of explicit methods much more efficient than previously extant methods are constructed. In particular, a method of strong order 2 with a deterministic component based on the classical Runge-Kutta method is constructed, and some numerical results are presented to demonstrate the efficacy of this approach.
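For contrast with the strong order 2 construction described above, a much simpler member of the stochastic Runge-Kutta family, a predictor-corrector (stochastic Heun) scheme, can be sketched as below; it is an illustrative low-order example, not the method the paper constructs, and with zero diffusion it collapses to the classical deterministic Heun method:

```python
import numpy as np

def stochastic_heun(f, g, y0, t_end, n_steps, rng):
    """Predictor-corrector (Heun-type) scheme for the scalar SDE
    dY = f(Y) dt + g(Y) dW: an Euler-Maruyama predictor followed by a
    trapezoidal average of drift and diffusion. With g == 0 this is exactly
    the classical second-order Runge-Kutta (Heun) method for dy/dt = f(y)."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        y_pred = y + f(y) * dt + g(y) * dW            # predictor step
        y = y + 0.5 * (f(y) + f(y_pred)) * dt \
              + 0.5 * (g(y) + g(y_pred)) * dW          # corrector step
    return y
```

The paper's contribution is precisely to push schemes of this general shape to higher strong order while keeping the deterministic component a classical Runge-Kutta method.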