463 results for Simon, Heinrich


Relevance:

10.00%

Publisher:

Abstract:

This paper considers three fields of interest in the recording process: the performer and the song; the technology of the recording context; and the commercial ambitions of the record company, and positions the record producer as a nexus at the interface of all three. The author reports his structured recollection of several recordings that all achieved substantial commercial success. The processes are considered from the author's perspective as the record producer, and from inception of the project to completion of the recorded work. What were the processes of engagement? Do the actions reported conform to the template of nexus? This paper proposes that in all recordings the function of producer/nexus is present and necessary: it exists in the interaction of the artistry and the technology, and is a useful paradigm for analysis of the recording process.

Relevance:

10.00%

Publisher:

Abstract:

This research identifies the residential mobility behaviour impacts of residential dissonance in Transit Oriented Developments (TODs) vs. non-TODs in Brisbane, Australia. Based on the characteristics of living environments (density, diversity, connectivity, and accessibility) and the travel preferences of 4545 individuals, respondents in 2009 were classified into one of four categories: TOD consonants, TOD dissonants, non-TOD dissonants, and non-TOD consonants. Binary logistic regression analyses were employed to identify the residential mobility behaviour of these groups between 2009 and 2011, controlling for time-varying covariates. The findings show that TOD dissonants and TOD consonants move residences at an equal rate; however, TOD dissonants are more likely to move to their preferred non-TOD areas. In contrast, non-TOD dissonants not only move residences at a lower rate, but their rate of mobility to their preferred TOD neighbourhoods is also significantly lower due to costs and other associated factors. The findings suggest that discrete land use policy development is required to integrate non-TOD dissonant and TOD dissonant behaviours to support TOD development in Brisbane.
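
As a rough illustration of the two-step design described above, the sketch below (not the authors' code; the input file and column names such as lives_in_tod, prefers_tod, and moved are hypothetical placeholders) classifies respondents into the four consonance/dissonance groups and then fits a binary logistic regression for residential moves.

```python
# A minimal sketch of the two-step design: classify respondents into
# consonance/dissonance groups, then fit a binary logit for whether
# they moved residence between the two survey waves.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_2009_2011.csv")  # hypothetical input file

# Step 1: cross living environment with stated preference.
def classify(row):
    if row.lives_in_tod:
        return "TOD_consonant" if row.prefers_tod else "TOD_dissonant"
    return "nonTOD_dissonant" if row.prefers_tod else "nonTOD_consonant"

df["group"] = df.apply(classify, axis=1)

# Step 2: binary logit for moving, controlling for covariates
# (age and income stand in for the time-varying controls).
model = smf.logit("moved ~ C(group) + age + income", data=df).fit()
print(model.summary())
```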

Relevance:

10.00%

Publisher:

Abstract:

Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model, and subsequently understand, motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving account for a significant proportion of crash occurrence, yet are rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and chronic roadside distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach—with a variety of statistical enhancements—has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash-generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that largely reflect behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that such a model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
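
To make the central claim concrete, here is a minimal sketch (my illustration, not the paper's model) of why a latent mixture can be the right tool: counts pooled from three distinct Poisson processes are fitted with a three-component Poisson mixture via EM. The component rates and sample sizes are invented for the demonstration.

```python
# Counts generated by a mixture of distinct processes are poorly
# described by a single count distribution; a finite mixture recovers
# the latent structure. Illustrative rates/sizes, not real crash data.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(lam, n)
                    for lam, n in [(0.5, 500), (3.0, 300), (10.0, 200)]])

K = 3
lam = np.array([1.0, 4.0, 8.0])  # initial component means
pi = np.full(K, 1.0 / K)         # initial mixing weights

for _ in range(200):             # EM iterations
    # E-step: posterior responsibility of each component for each count.
    r = pi * poisson.pmf(y[:, None], lam)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and component means.
    pi = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)

print("weights:", pi.round(3), "means:", lam.round(2))
```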

Relevance:

10.00%

Publisher:

Abstract:

The state of the practice in safety has advanced rapidly in recent years with the emergence of new tools and processes for improving the selection of the most cost-effective safety countermeasures. However, many challenges prevent fair and objective comparisons of countermeasures applied across safety disciplines (e.g. engineering, emergency services, and behavioral measures). These countermeasures operate at different spatial scales, are often funded by different financial sources and agencies, and have associated costs and benefits that are difficult to estimate. This research proposes a methodology by which both behavioral and engineering safety investments are considered and compared in a specific local context. The methodology involves a multi-stage process that enables the analyst to select countermeasures that yield high benefits relative to costs, are targeted for a particular project, and may involve costs and benefits that accrue over varying spatial and temporal scales. The methodology is illustrated using a case study from the Geary Boulevard Corridor in San Francisco, California. The case study illustrates that: 1) the methodology enables the identification and assessment of a wide range of safety investment types at the project level; 2) the nature of crash histories lends itself to the selection of both behavioral and engineering investments, requiring cooperation across agencies; and 3) the results of the cost-benefit analysis are highly sensitive to cost and benefit assumptions, so all assumptions must be listed and justified. It is recommended that a sensitivity analysis be conducted whenever there is large uncertainty surrounding cost and benefit assumptions.
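
The ranking step of such a methodology might look like the following sketch (hypothetical countermeasures and dollar values, not the Geary Boulevard data): benefit and cost streams accruing over different horizons are discounted to present value, candidates are ranked by benefit-cost ratio, and the ranking is re-run under different discount rates as a simple sensitivity analysis.

```python
# Discount each countermeasure's annual benefit and cost streams,
# rank by benefit-cost ratio, and test sensitivity to the discount
# rate. All names and figures are illustrative placeholders.
def present_value(stream, rate):
    """Discount a list of annual values to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream, start=1))

countermeasures = {  # name: (annual benefits, annual costs)
    "red-light cameras":    ([120_000] * 10, [40_000] * 10),
    "pedestrian education": ([60_000] * 5,  [15_000] * 5),
    "signal retiming":      ([90_000] * 8,  [25_000] * 8),
}

for rate in (0.03, 0.07):  # sensitivity to the discount-rate assumption
    ranked = sorted(
        ((name, present_value(b, rate) / present_value(c, rate))
         for name, (b, c) in countermeasures.items()),
        key=lambda pair: -pair[1])
    print(f"rate={rate:.0%}:", [(n, round(r, 2)) for n, r in ranked])
```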

Relevance:

10.00%

Publisher:

Abstract:

Stimulation of the androgen receptor via bioavailable androgens, including testosterone and testosterone metabolites, is a key driver of prostate development and the early stages of prostate cancer. Androgens are hydrophobic and as such require carrier proteins, including sex hormone-binding globulin (SHBG), to enable efficient distribution from sites of biosynthesis to target tissues. The similarly hydrophobic corticosteroids also require a carrier protein whose affinity for steroid is modulated by proteolysis. However, proteolytic mechanisms regulating the SHBG/androgen complex have not been reported. Here, we show that the cancer-associated serine proteases, kallikrein-related peptidase (KLK)4 and KLK14, bind strongly to SHBG in glutathione S-transferase interaction analyses. Further, we demonstrate that active KLK4 and KLK14 cleave human SHBG at unique sites and in an androgen-dependent manner. KLK4 separated androgen-free SHBG into its two laminin G-like (LG) domains that were subsequently proteolytically stable even after prolonged digestion, whereas a catalytically equivalent amount of KLK14 reduced SHBG to small peptide fragments over the same period. Conversely, proteolysis of 5α-dihydrotestosterone (DHT)-bound SHBG was similar for both KLKs and left the steroid binding LG4 domain intact. Characterization of this proteolysis fragment by [³H]-labeled DHT binding assays revealed that it retained identical affinity for androgen compared with full-length SHBG (dissociation constant = 1.92 nM). Consistent with this, both full-length SHBG and SHBG-LG4 significantly increased DHT-mediated transcriptional activity of the androgen receptor compared with DHT delivered without carrier protein. Collectively, these data provide the first evidence that SHBG is a target for proteolysis and demonstrate that a stable fragment derived from proteolysis of steroid-bound SHBG retains binding function in vitro.
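
For orientation, the reported affinity can be translated into expected occupancy with the standard one-site binding relation f = [L] / (Kd + [L]); the short calculation below is generic binding algebra using the paper's Kd = 1.92 nM, not part of the study itself.

```python
# Fractional occupancy of binding sites under simple 1:1 binding:
# f = [DHT] / (Kd + [DHT]). Uses the reported Kd; DHT concentrations
# are arbitrary illustrative values.
KD_NM = 1.92  # reported affinity of the SHBG fragment for DHT (nM)

for dht_nm in (0.5, 1.92, 10.0):
    f = dht_nm / (KD_NM + dht_nm)
    print(f"[DHT] = {dht_nm:5.2f} nM -> fractional occupancy = {f:.2f}")
# At [DHT] = Kd, occupancy is 0.50 by definition.
```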

Relevance:

10.00%

Publisher:

Abstract:

The focus of governments on increasing active travel has motivated renewed interest in cycling safety. Bicyclists are up to 20 times more likely to be involved in serious injury crashes than drivers, so understanding the relationships among factors in bicyclist crash risk is critically important for identifying effective policy tools, for informing bicycle infrastructure investments, and for identifying high-risk bicycling contexts. This study aims to better understand the complex relationships between bicyclists' self-reported injuries resulting from crashes (e.g. hitting a car) and non-crashes (e.g. spraining an ankle) and the perceived risk of cycling as a function of cyclist exposure, rider conspicuity, riding environment, rider risk aversion, and rider ability. Self-reported data from 2,500 Queensland cyclists are used to estimate a series of seemingly unrelated regressions to examine the relationships among factors. The major findings suggest that perceived risk does not appear to influence injury rates, nor do injury rates influence the perceived risk of cycling. Riders who perceive cycling as risky tend not to be commuters, do not engage in group riding, tend to always wear mandatory helmets and front lights, and lower their perception of risk by increasing days per week of riding and by increasing the proportion of riding on bicycle paths. Riders who always wear helmets have lower crash injury risk. Increasing the number of days per week of riding tends to decrease both crash injury and non-crash injury risk (e.g. a sprain). Further work is needed to replicate some of the findings in this study.
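
A sketch of the estimation strategy (not the study's code) is shown below using the SUR estimator from the `linearmodels` package; the survey file and column names are hypothetical placeholders.

```python
# Seemingly unrelated regressions for the three outcomes analysed
# here: crash injuries, non-crash injuries, and perceived risk.
# Joint estimation allows correlated errors across the equations.
import pandas as pd
from linearmodels.system import SUR

df = pd.read_csv("cyclist_survey.csv")  # hypothetical survey extract

exog = df[["days_per_week", "helmet_always", "group_riding",
           "path_share"]].assign(const=1.0)
equations = {
    "crash_injury":    {"dependent": df["crash_injuries"],    "exog": exog},
    "noncrash_injury": {"dependent": df["noncrash_injuries"], "exog": exog},
    "perceived_risk":  {"dependent": df["perceived_risk"],    "exog": exog},
}
res = SUR(equations).fit()
print(res)
```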

Relevance:

10.00%

Publisher:

Abstract:

Many methods currently exist for deformable face fitting. A drawback to nearly all these approaches is that (i) they are noisy in terms of landmark positions, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped $\ell_1$-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive alignment performance improvement and refinement is obtained using very weak initialization as "anchors".
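
The computational kernel of a grouped $\ell_1$ approach is the group soft-thresholding (proximal) operator, sketched below as my own illustration rather than the paper's algorithm: each group of landmark displacements is shrunk toward zero as a whole, so a group of estimates is either retained coherently or suppressed together.

```python
# Proximal operator of tau * sum_g ||x_g||_2 (the grouped L1 / L2,1
# norm), applied row-wise with one row per group of landmarks.
import numpy as np

def group_soft_threshold(X, tau):
    """Shrink each row of X toward zero by tau in L2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X

# Example: 5 landmark displacement groups in 2-D; small groups vanish
# entirely while large ones are only shrunk.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))
print(group_soft_threshold(X, tau=1.0))
```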

Relevance:

10.00%

Publisher:

Abstract:

Student performance on examinations is influenced by the level of difficulty of the questions. It therefore seems reasonable to propose that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with the assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
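
The correlation analysis could be reproduced along the following lines (a sketch with a hypothetical input file of coded questions, not the study's data): each of the six complexity measures is correlated against the subjective difficulty rating.

```python
# Rank correlation of each complexity measure with rated difficulty
# across exam questions. File and column names are placeholders.
import pandas as pd
from scipy.stats import spearmanr

questions = pd.read_csv("exam_questions.csv")  # one row per question
measures = ["external_refs", "explicitness", "linguistic", "conceptual",
            "code_length", "bloom_level"]

for m in measures:
    rho, p = spearmanr(questions[m], questions["difficulty"])
    print(f"{m:15s} rho={rho:+.2f} p={p:.3f}")
```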

Relevance:

10.00%

Publisher:

Abstract:

Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students vs. the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each subject. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
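
A minimal sketch of the kind of data structure such a model implies (my own illustration; the authors' web-based implementation is not reproduced here) is given below: each exam item maps to topics and outcomes at a Bloom mastery level, with minimal vs. aspirational flags and a confidence score.

```python
# Illustrative data structures for the mapping model: exam items tied
# to syllabus topics/outcomes at a mastery level, so expected learning
# can later be compared against itemised student grades.
from dataclasses import dataclass, field

@dataclass
class Mapping:
    topic: str         # e.g. an ACM/IEEE CS2013 topic
    outcome: str       # intended learning outcome
    bloom: int         # mastery level on Bloom's Taxonomy (1-6)
    minimal: bool      # minimal (bare-pass) vs. aspirational expectation
    confidence: float  # rater confidence in this mapping, 0-1

@dataclass
class ExamItem:
    item_id: str
    marks: float
    mappings: list[Mapping] = field(default_factory=list)

item = ExamItem("Q1a", 5.0, [Mapping("Control structures", "trace a loop",
                                     bloom=3, minimal=True, confidence=0.9)])
print(item)
```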

Relevance:

10.00%

Publisher:

Abstract:

Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The average classification accuracy of the participants on the three Neo-Piagetian stages was 85%, 71%, and 78% respectively. Participants also rated their agreement with the expert classifications, and indicated high agreement (91%, 83%, and 91% across the three Neo-Piagetian stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions rose from 29% before the tutorial to 75% after it. Our key contribution is the demonstration of the feasibility of the Neo-Piagetian approach to classifying assessment materials, by demonstrating that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.

Relevance:

10.00%

Publisher:

Abstract:

Work integration social enterprises (WISE) seek to create employment and pathways to employment for those highly disadvantaged in the labour market. This chapter examines the effects of WISE on the wellbeing of immigrants and refugees experiencing multiple barriers to economic and social participation. Drawing on an evaluation of a programme that supports seven such enterprises in the Australian state of Victoria, the effects of involvement for individual participants and their communities are examined. The study finds that this social enterprise model affords unique local opportunities for economic and social participation for groups experiencing significant barriers to meaningful employment. These opportunities have a positive impact on individual and community-level wellbeing. However, the financial costs of the model are high relative to other employment programmes, which is consistent with international findings on intermediate labour market programmes. The productivity costs of WISE are also disproportionately high compared to private sector competitors in some industries. This raises considerable dilemmas for social enterprise operators seeking to produce social value and achieve business sustainability while bearing high productivity costs to fulfil their mission. Further, the evaluation illuminates an ongoing need to address the systemic and structural drivers of health and labour market inequalities that characterize socio-economic participation for immigrants and refugees.

Relevance:

10.00%

Publisher:

Abstract:

Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks, including object recognition/detection, pedestrian detection, and facial expression classification. A central question in the vision, learning, and neuroscience communities is why these architectures perform so well. In this paper, we offer a unique perspective on this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features, in conjunction with a linear SVM, can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and to offer substantial improvements in terms of computational and storage efficiency.
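
The underlying algebraic observation can be checked in a few lines: if the feature map is linear in the image, a linear SVM on the features is exactly a linear classifier on raw pixels with the weights folded through the filter bank. The sketch below (my illustration, not the paper's code) uses random matrices as stand-ins for the V1-inspired filters and the learned SVM weights.

```python
# If phi(x) = F @ x is linear in the image x, then the SVM score
# w . phi(x) equals (F.T @ w) . x: the filter bank acts as a
# reweighting of the pixel (or basis) space.
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 256                 # pixels, filter responses
F = rng.normal(size=(k, d))    # stand-in for a linear filter bank
w = rng.normal(size=k)         # stand-in for learned linear SVM weights

x = rng.normal(size=d)         # a flattened image
score_features = w @ (F @ x)   # SVM score in feature space
score_pixels = (F.T @ w) @ x   # same score, folded into pixel space

assert np.isclose(score_features, score_pixels)
print(score_features, score_pixels)
```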

Relevance:

10.00%

Publisher:

Abstract:

Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis, developing incident duration models based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull models, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident durations arising from crashes and hazards. A Weibull model with gamma heterogeneity was most suitable for modelling the duration of stationary-vehicle incidents. Significant variables affecting incident duration include characteristics of the incident (severity, type, towing requirements, etc.) as well as its location, time of day, and traffic characteristics. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is the finding that the durations of each type of incident are uniquely different and respond to different factors. The results of this study are useful for traffic incident management agencies implementing strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
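
A sketch of the fixed-parameter AFT fits follows (not the study's code; the random-parameter and gamma-heterogeneity variants are beyond the `lifelines` package used here, and the input file and column names are hypothetical placeholders).

```python
# Compare three parametric AFT specifications of incident duration
# by AIC, in the spirit of the model comparison described above.
import pandas as pd
from lifelines import (WeibullAFTFitter, LogLogisticAFTFitter,
                       LogNormalAFTFitter)

df = pd.read_csv("incidents.csv")  # duration in minutes + covariates
cols = ["duration", "severity", "towing_required", "peak_hour"]

for Fitter in (WeibullAFTFitter, LogLogisticAFTFitter, LogNormalAFTFitter):
    aft = Fitter().fit(df[cols], duration_col="duration")
    print(Fitter.__name__, "AIC:", round(aft.AIC_, 1))
```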

Relevance:

10.00%

Publisher:

Abstract:

Background Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and population health. The Global Burden of Disease (GBD) studies done in 1990 and 2000 have been the only studies to quantify non-fatal health outcomes across an exhaustive set of disorders at the global and regional level. Neither effort quantified uncertainty in prevalence or years lived with disability (YLDs). Methods Of the 291 diseases and injuries in the GBD cause list, 289 cause disability. For 1160 sequelae of the 289 diseases and injuries, we undertook a systematic analysis of prevalence, incidence, remission, duration, and excess mortality. Sources included published studies, case notification, population-based cancer registries, other disease registries, antenatal clinic serosurveillance, hospital discharge data, ambulatory care data, household surveys, other surveys, and cohort studies. For most sequelae, we used a Bayesian meta-regression method, DisMod-MR, designed to address key limitations in descriptive epidemiological data, including missing data, inconsistency, and large methodological variation between data sources. For some disorders, we used natural history models, geospatial models, back-calculation models (models calculating incidence from population mortality rates and case fatality), or registration completeness models (models adjusting for incomplete registration with health-system access and other covariates). Disability weights for 220 unique health states were used to capture the severity of health loss. YLDs by cause at age, sex, country, and year levels were adjusted for comorbidity with simulation methods. We included uncertainty estimates at all stages of the analysis. Findings Global prevalence for all ages combined in 2010 across the 1160 sequelae ranged from fewer than one case per 1 million people to 350 000 cases per 1 million people. Prevalence and severity of health loss were weakly correlated (correlation coefficient −0.37). In 2010, there were 777 million YLDs from all causes, up from 583 million in 1990. The main contributors to global YLDs were mental and behavioural disorders, musculoskeletal disorders, and diabetes or endocrine diseases. The leading specific causes of YLDs were much the same in 2010 as they were in 1990: low back pain, major depressive disorder, iron-deficiency anaemia, neck pain, chronic obstructive pulmonary disease, anxiety disorders, migraine, diabetes, and falls. Age-specific prevalence of YLDs increased with age in all regions and has decreased slightly from 1990 to 2010. Regional patterns of the leading causes of YLDs were more similar compared with years of life lost due to premature mortality. Neglected tropical diseases, HIV/AIDS, tuberculosis, malaria, and anaemia were important causes of YLDs in sub-Saharan Africa. Interpretation Rates of YLDs per 100 000 people have remained largely constant over time but rise steadily with age. Population growth and ageing have increased YLD numbers and crude rates over the past two decades. Prevalences of the most common causes of YLDs, such as mental and behavioural disorders and musculoskeletal disorders, have not decreased. Health systems will need to address the needs of the rising numbers of individuals with a range of disorders that largely cause disability but not mortality. Quantification of the burden of non-fatal health outcomes will be crucial to understand how well health systems are responding to these challenges. Effective and affordable strategies to deal with this rising burden are an urgent priority for health systems in most parts of the world. Funding Bill & Melinda Gates Foundation.
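
For readers unfamiliar with the YLD metric, the basic arithmetic before comorbidity adjustment is simply prevalent cases multiplied by a disability weight, as in the illustrative sketch below (invented numbers, not GBD estimates; the simulation-based comorbidity adjustment and uncertainty propagation are omitted).

```python
# YLDs for a sequela = prevalent cases x disability weight.
# All figures are illustrative placeholders, not GBD results.
sequelae = [  # (name, prevalent cases, disability weight)
    ("low back pain",             50_000_000, 0.035),
    ("major depressive disorder", 30_000_000, 0.145),
    ("iron-deficiency anaemia",   80_000_000, 0.005),
]

total = sum(cases * dw for _, cases, dw in sequelae)
for name, cases, dw in sequelae:
    print(f"{name:28s} YLDs = {cases * dw:,.0f}")
print(f"{'total':28s} YLDs = {total:,.0f}")
```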

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose an approach which attempts to solve the problem of surveillance event detection, assuming that we know the definition of the events. To facilitate the discussion, we first define two concepts: the event of interest refers to the event that the user requests the system to detect, and the background activities are any other events in the video corpus. This is an unsolved problem due to the factors listed below.

1) Occlusions and clustering: Surveillance scenes of significant interest, at locations such as airports, railway stations, and shopping centers, are often crowded, so occlusions and clustering of people are frequently encountered. This significantly affects the feature extraction step; for instance, trajectories generated by object tracking algorithms are usually not robust in such situations.

2) The requirement for real-time detection: The system should process the video fast enough, in both the feature extraction and detection steps, to facilitate real-time operation.

3) Massive size of the training data set: Suppose an event lasts for 1 minute in a video with a frame rate of 25 fps; the number of frames for this event is 60 × 25 = 1500. If we want a training data set with many positive instances of the event, the video is likely to be very large (i.e. hundreds of thousands of frames or more). How to handle such a large data set is a problem frequently encountered in this application.

4) Difficulty in separating the event of interest from background activities: The events of interest often co-exist with a set of background activities. Temporal groundtruth is typically very ambiguous, as it does not distinguish the event of interest from the wide range of co-existing background activities. However, it is not practical to annotate the locations of the events in large amounts of video data. This problem becomes more serious in the detection of multi-agent interactions, since the location of these events often cannot be constrained to within a bounding box.

5) Challenges in determining the temporal boundaries of the events: An event can occur at any arbitrary time with an arbitrary duration. The temporal segmentation of events is difficult and ambiguous, and is also affected by other factors such as occlusions.