898 results for weakly n-hyponormal operators
Abstract:
The rise of the peer economy poses complex new regulatory challenges for policy-makers. The peer economy, typified by services like Uber and AirBnB, promises substantial productivity gains through the more efficient use of existing resources and a marked reduction in regulatory overheads. These services are rapidly disrupting established markets, but the regulatory trade-offs they present are difficult to evaluate. In this paper, we examine the peer economy through the context of ride-sharing and the ongoing struggle over regulatory legitimacy between the taxi industry and new entrants Uber and Lyft. We first sketch the outlines of ride-sharing as a complex regulatory problem, showing how questions of efficiency are necessarily bound up in questions about levels of service, controls over pricing, and different approaches to setting, upholding, and enforcing standards. We outline the need for data-driven policy to understand the way that algorithmic systems work and what effects these might have in the medium to long term on measures of service quality, safety, labour relations, and equality. Finally, we discuss how the competition for legitimacy is not primarily being fought on utilitarian grounds, but is instead carried out within the context of a heated ideological battle between different conceptions of the role of the state and private firms as regulators. We ultimately argue that the key to understanding these regulatory challenges is to develop better conceptual models of the governance of complex systems by private actors and the available methods the state has of influencing their actions. These struggles are not, as is often thought, struggles between regulated and unregulated systems; rather, they turn on the important regulatory work carried out by powerful, centralised private firms – both the incumbents of existing markets and the disruptive network operators of the peer economy.
Abstract:
Over the past several years, evidence has accumulated showing that the cerebellum plays a significant role in cognitive function. Here we show, in a large genetically informative twin sample (n = 430; aged 16-30 years), that the cerebellum is strongly, and reliably (n = 30 rescans), activated during an n-back working memory task, particularly lobules I-IV, VIIa Crus I and II, IX and the vermis. Monozygotic twin correlations for cerebellar activation were generally much larger than dizygotic twin correlations, consistent with genetic influences. Structural equation models showed that up to 65% of the variance in cerebellar activation during working memory is genetic (averaging 34% across significant voxels), most prominently in lobules VI and VIIa Crus I, with the remaining variance explained by unique/unshared environmental factors. Heritability estimates for brain activation in the cerebellum agree with those found for working memory activation in the cerebral cortex, even though cerebellar cyto-architecture differs substantially. Phenotypic correlations between BOLD percent signal change in cerebrum and cerebellum were low, and bivariate modeling indicated that genetic influences on the cerebellum are at least partly specific to the cerebellum. Activation at the voxel level correlated very weakly with cerebellar gray matter volume, suggesting specific genetic influences on the BOLD signal. Heritable signals identified here should facilitate discovery of genetic polymorphisms influencing cerebellar function through genome-wide association studies, to elucidate the genetic liability to brain disorders affecting the cerebellum.
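The twin-correlation logic summarised above can be illustrated with a Falconer-style ACE decomposition. This is only a rough sketch with hypothetical correlations; the study itself fits full structural equation models to voxel-wise BOLD data.

```python
# Illustrative Falconer-style ACE decomposition from twin correlations.
# The correlations below are hypothetical, not taken from the study; the paper
# fits full structural equation models rather than these closed-form estimates.
r_mz, r_dz = 0.60, 0.25   # hypothetical MZ and DZ correlations for one voxel's activation

a2 = 2 * (r_mz - r_dz)    # additive genetic variance (A), i.e. heritability
c2 = 2 * r_dz - r_mz      # shared (common) environment (C)
e2 = 1 - r_mz             # unique environment (E), includes measurement error

print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")   # here: A = 0.70, C = -0.10, E = 0.40
```

When the DZ correlation is less than half the MZ correlation, as in this hypothetical case, the C estimate goes negative and is typically constrained to zero, leaving an AE model of the kind consistent with the reported results.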
Abstract:
For the first decade of its existence, the concept of citizen journalism has described an approach which was seen as a broadening of the participant base in journalistic processes, but still involved only a comparatively small subset of overall society – for the most part, citizen journalists were news enthusiasts and “political junkies” (Coleman, 2006) who, as some exasperated professional journalists put it, “wouldn’t get a job at a real newspaper” (The Australian, 2007), but nonetheless followed many of the same journalistic principles. The investment – if not of money, then at least of time and effort – involved in setting up a blog or participating in a citizen journalism Website remained substantial enough to prevent the majority of Internet users from engaging in citizen journalist activities to any significant extent; what emerged in the form of news blogs and citizen journalism sites was a new online elite which for some time challenged the hegemony of the existing journalistic elite, but gradually also merged with it. The mass adoption of next-generation social media platforms such as Facebook and Twitter, however, has led to the emergence of a new wave of quasi-journalistic user activities which now much more closely resemble the “random acts of journalism” which JD Lasica envisaged in 2003. Social media are not exclusively or even predominantly used for citizen journalism; instead, citizen journalism is now simply a by-product of user communities engaging in exchanges about the topics which interest them, or tracking emerging stories and events as they happen. Such platforms – and especially Twitter with its system of ad hoc hashtags that enable the rapid exchange of information about issues of interest – provide spaces for users to come together to “work the story” through a process of collaborative gatewatching (Bruns, 2005), content curation, and information evaluation which takes place in real time and brings together everyday users, domain experts, journalists, and potentially even the subjects of the story themselves. Compared to the spaces of news blogs and citizen journalism sites, but also of conventional online news Websites, which are controlled by their respective operators and inherently position user engagement as a secondary activity to content publication, these social media spaces are centred around user interaction, providing a third-party space in which everyday as well as institutional users, laypeople as well as experts converge without being able to control the exchange. Drawing on a number of recent examples, this article will argue that this results in a new dynamic of interaction and enables the emergence of a more broadly-based, decentralised, second wave of citizen engagement in journalistic processes.
Abstract:
We demonstrate a geometrically inspired technique for computing Evans functions for the linearised operators about travelling waves. Using the examples of the F-KPP equation and a Keller–Segel model of bacterial chemotaxis, we produce an Evans function which is computable through several orders of magnitude in the spectral parameter and show how such a function can naturally be extended into the continuous spectrum. In both examples, we use this function to numerically verify the absence of eigenvalues in a large region of the right half of the spectral plane. We also include a new proof of spectral stability in the appropriate weighted space of travelling waves of speed c ≥ √(2δ) in the F-KPP equation.
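For readers unfamiliar with Evans functions, the standard construction for a scalar reaction–diffusion equation is sketched below. This is a generic outline only; the paper's precise normalisation of the F-KPP equation, the exponential weight, and the Keller–Segel system differ in their details.

```latex
% Generic sketch: travelling-wave frame for u_t = \delta u_{xx} + f(u),
% with u(x,t) = \varphi(\xi), \xi = x - ct.
\begin{aligned}
&\text{Linearisation about } \varphi:\quad
  \mathcal{L}p = \delta p'' + c\,p' + f'(\varphi)\,p,\\
&(\mathcal{L}-\lambda)p = 0 \;\Longleftrightarrow\;
  Y' = A(\xi;\lambda)\,Y,\qquad
  Y = \begin{pmatrix} p \\ p' \end{pmatrix},\quad
  A(\xi;\lambda) = \begin{pmatrix} 0 & 1 \\[2pt]
  \dfrac{\lambda - f'(\varphi)}{\delta} & -\dfrac{c}{\delta} \end{pmatrix},\\
&D(\lambda) = \det\!\bigl(\,Y^{-}(\xi_0;\lambda)\;\;Y^{+}(\xi_0;\lambda)\,\bigr),
\end{aligned}
```

where Y∓ are the solutions decaying as ξ → ∓∞ and D(λ) vanishes exactly at eigenvalues of the linearised operator; working in a weighted space corresponds to conjugating the system by the weight before constructing D.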
Abstract:
Pollution on electrical insulators is one of the greatest causes of failure of substations subjected to high levels of salinity and environmental pollution. Considering leakage current as the main indicator of pollution on insulators, this paper focuses on establishing the effect of the environmental conditions on the risk of failure due to pollution on insulators and determining the significant change in the magnitude of the pollution on the insulators during dry and humid periods. Hierarchical segmentation analysis was used to establish the effect of environmental conditions on the risk of failure due to pollution on insulators. The Kruskal-Wallis test was utilized to determine the significant changes in the magnitude of the pollution due to climate periods. An important result was the discovery that leakage current was more common on insulators during dry periods than humid ones. There was also a higher risk of failure due to pollution during dry periods. During the humid period, various temperatures and wind directions produced a small change in the risk of failure. As a technical result, operators of electrical substations can now identify the cause of an increase in risk of failure due to pollution in the area. The research provides a contribution towards understanding the behaviour of the leakage current under conditions similar to those of the Colombian Caribbean coast and how they affect the risk of failure of the substation due to pollution.
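The Kruskal-Wallis comparison described above can be reproduced in outline as follows. The data here are simulated stand-ins for leakage-current readings grouped by climate period; sample sizes, units, and distributions are hypothetical.

```python
# Minimal sketch of a Kruskal-Wallis test on leakage-current magnitude by climate period.
# The samples are simulated placeholders, not the study's measurements.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
dry   = rng.gamma(shape=4.0, scale=0.8, size=120)   # hypothetical dry-period readings (mA)
humid = rng.gamma(shape=3.0, scale=0.7, size=120)   # hypothetical humid-period readings (mA)

h_stat, p_value = kruskal(dry, humid)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates a significant shift in leakage-current magnitude between
# climate periods; Kruskal-Wallis is the rank-based analogue of one-way ANOVA, so it
# makes no normality assumption about the current measurements.
```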
Abstract:
How can obstacles to innovation be overcome in road construction? Using a focus group methodology, and based on two prior rounds of empirical work, the analysis in this chapter generates a set of four key solutions to two main construction innovation obstacles: (1) restrictive tender assessment and (2) disagreement over who carries the risk of new product failure. The four key solutions uncovered were: (1) pre-project product certification; (2) past innovation performance assessment; (3) earlier involvement of product suppliers and road asset operators; and (4) performance-based specifications. Additional research is suggested to elicit deeper insights into possible solutions to construction innovation obstacles, and should emphasise furthering the theoretical interpretation of empirical phenomena.
Abstract:
In recent years a significant amount of research has been undertaken in collision avoidance and personnel location technology in order to reduce the number of incidents involving pedestrians and mobile plant equipment, which are a high risk in underground coal mines. Improving the visibility of pedestrians to drivers would potentially reduce the likelihood of these incidents. In the road safety context, a variety of approaches have been used to make pedestrians more conspicuous to drivers at night (including vehicle and roadway lighting technologies and night vision enhancement systems). However, emerging research from our group and others has demonstrated that clothing incorporating retroreflective markers on the movable joints as well as the torso can provide highly significant improvements in pedestrian visibility in reduced illumination. Importantly, retroreflective markers are most effective when positioned on the movable joints, creating a sensation of “biological motion”. Based only on the motion of points on the movable joints of an otherwise invisible body, observers can quickly recognize a walking human form, and even correctly judge characteristics such as gender and weight. An important and as yet unexplored question is whether the benefits of these retroreflective clothing configurations translate to the context of mining, where workers are operating under low light conditions. The fact that biomotion clothing benefits both young and older drivers, as well as those with eye conditions common in people over 50 years of age, reinforces its potential application in the mining industry, which employs many workers in this age bracket. This paper will summarise the visibility benefits of retroreflective markers in a biomotion configuration for the mining industry, highlighting that this form of clothing has the potential to be an affordable and convenient way to provide a sizeable safety benefit. It does not involve modifications to vehicles, drivers, or infrastructure. Instead, adding biomotion markings to standard retroreflective vests can enhance the night-time conspicuity of mining workers by capitalising on perceptual capabilities that have already been well documented.
Abstract:
More than 14 million Dish Network subscribers have been without Breaking Bad, Mad Men, and The Walking Dead since June when the satellite provider pulled AMC Networks—AMC, Sundance, IFC, and WE tv—from its lineup in a dispute over carriage fees. The tactic is called a blackout, and it’s becoming increasingly common in the television landscape as pay-TV operators and station owners battle over the nearly $5 billion at stake in the next 5 years.
Abstract:
Issue addressed The paper examines the meanings of food safety among food businesses deemed non-compliant and considers the need for an ‘insider perspective’ to inform a more nuanced health promotion practice. Methods In-depth interviews were conducted with 29 food business operators who had been recently deemed ‘non-compliant’ through Council inspection. Results Paradoxically, these ‘non-compliers’ revealed a strong belief in the importance of food safety as well as a desire to comply with the regulations as communicated to them by Environmental Health Officers (EHOs). Conclusions The evidence base of food safety is largely informed by the science of food hazards, yet there is a very important need to illuminate the ‘insider’ experience of food businesses doing food safety on a daily basis. This requires a more socially nuanced appreciation of food businesses beyond the simple dichotomy of compliant/non-compliant. So what? Armed with a deeper understanding of the social context surrounding food safety practice, it is anticipated that a more balanced, collaborative mode of food safety health promotion could develop which could add to the current signature model of regulation.
Abstract:
The numerical solution of fractional partial differential equations poses significant computational challenges in regard to efficiency as a result of the spatial nonlocality of the fractional differential operators. The dense coefficient matrices that arise from spatial discretisation of these operators mean that even one-dimensional problems can be difficult to solve using standard methods on grids comprising thousands of nodes or more. In this work we address this issue of efficiency for one-dimensional, nonlinear space-fractional reaction–diffusion equations with fractional Laplacian operators. We apply variable-order, variable-stepsize backward differentiation formulas in a Jacobian-free Newton–Krylov framework to advance the solution in time. A key advantage of this approach is the elimination of any requirement to form the dense matrix representation of the fractional Laplacian operator. We show how a banded approximation to this matrix, which can be formed and factorised efficiently, can be used as part of an effective preconditioner that accelerates convergence of the Krylov subspace iterative solver. Our approach also captures the full contribution from the nonlinear reaction term in the preconditioner, which is crucial for problems that exhibit stiff reactions. Numerical examples are presented to illustrate the overall effectiveness of the solver.
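The preconditioning idea described above can be illustrated with a small linear toy problem: the operator is applied matrix-free inside a Krylov iteration, while a banded truncation of it is factorised once and reused as the preconditioner. This sketch is not the authors' solver; it omits the time stepping, the Jacobian-free Newton iteration, and the nonlinear reaction contribution, and the matrix, bandwidth, and sizes are all illustrative.

```python
# Toy illustration of a banded preconditioner for a dense, nonlocal operator.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, splu, gmres

n = 500
idx = np.arange(n)
# Dense "nonlocal" matrix standing in for a discretised fractional Laplacian:
# off-diagonal entries decay algebraically with distance from the diagonal.
A = -1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]) ** 2.5)
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=1) + 1.0)       # make the matrix diagonally dominant

# Matrix-free action of A: in a real solver this would be a fast matvec,
# so the dense matrix is never assembled.
A_op = LinearOperator((n, n), matvec=lambda v: A @ v)

# Banded approximation: keep entries within bandwidth k, factorise once,
# and reuse the factorisation as the preconditioner for every Krylov solve.
k = 5
band = np.where(np.abs(idx[:, None] - idx[None, :]) <= k, A, 0.0)
M_lu = splu(csc_matrix(band))
M = LinearOperator((n, n), matvec=M_lu.solve)

b = np.ones(n)
x, info = gmres(A_op, b, M=M)
print("converged:", info == 0, "  residual norm:", np.linalg.norm(A @ x - b))
```

In the setting of the paper, the banded factorisation would also fold in the linearised reaction term so that stiff reactions are captured by the preconditioner.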
Abstract:
Objective. To analyze the effect of HLA-DR genes on susceptibility to and severity of ankylosing spondylitis (AS). Methods. Three hundred sixty-three white British AS patients were studied; 149 were carefully assessed for a range of clinical manifestations, and disease severity was assessed using a structured questionnaire. Limited HLA class I typing and complete HLA-DR typing were performed using DNA-based methods. HLA data from 13,634 healthy white British bone marrow donors were used for comparison. Results. A significant association between DR1 and AS was found, independent of HLA-B27 (overall odds ratio [OR] 1.4, 95% confidence interval [95% CI] 1.1-1.8, P = 0.02; relative risk [RR] 2.7, 95% CI 1.5-4.8, P = 6 × 10⁻⁴ among homozygotes; RR 2.1, 95% CI 1.5-2.8, P = 5 × 10⁻⁶ among heterozygotes). A large but weakly significant association between DR8 and AS was noted, particularly among DR8 homozygotes (RR 6.8, 95% CI 1.6-29.2, P = 0.01 among homozygotes; RR 1.6, 95% CI 1.0-2.7, P = 0.07 among heterozygotes). A negative association with DR12 (OR 0.22, 95% CI 0.09-0.5, P = 0.001) was noted. HLA-DR7 was associated with younger age at onset of disease (mean age at onset 18 years for DR7-positive patients and 23 years for DR7-negative patients; Z score 3.21, P = 0.001). No other HLA class I or class II associations with disease severity or with different clinical manifestations of AS were found. Conclusion. The results of this study suggest that HLA-DR genes may have a weak effect on susceptibility to AS independent of HLA-B27, but do not support suggestions that they affect disease severity or different clinical manifestations.
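As a point of reference for how association statistics like the DR1 odds ratio above are derived, the sketch below computes an odds ratio and Wald 95% confidence interval from a 2x2 table. The counts are hypothetical placeholders, not the study's data.

```python
# Odds ratio and 95% CI from a 2x2 carriage-by-disease table (hypothetical counts).
import math

a, b = 80, 2300       # DR1-positive: AS cases, healthy controls (hypothetical)
c, d = 283, 11334     # DR1-negative: AS cases, healthy controls (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR), Wald approximation
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```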
Abstract:
Background: Perennial Ryegrass is a major cause of rhinitis in spring and early summer. Bahia grass, Paspalum notatum, flowers late into summer and could account for allergic rhinitis at this time. We determined the frequency of serum immunoglobulin (Ig)E reactivity with Bahia grass in Ryegrass pollen allergic patients and investigated IgE cross-reactivity between Bahia and Ryegrass. Methods: Sera from 33 Ryegrass pollen allergic patients and 12 nonatopic donors were tested for IgE reactivity with Bahia and Ryegrass pollen extracts (PE) by enzyme-linked immunosorbent assay (ELISA), western blotting and inhibition ELISA. Allergen-specific antibodies from a pool of sera from allergic donors were affinity purified and tested for IgE cross-reactivity. Results: Seventy-eight per cent of the sera had IgE reactivity with Bahia grass, but more weakly than with Ryegrass. Antibodies eluted from the major Ryegrass pollen allergens, Lol p 1 and Lol p 5, showed IgE reactivity with allergens of Ryegrass and Canary but not Bahia or Bermuda grasses. Timothy, Canary and Ryegrass inhibited IgE reactivity with Ryegrass and Bahia grass, whereas Bahia, Johnson and Bermuda grass did not inhibit IgE reactivity with Ryegrass. Conclusions: The majority of Ryegrass allergic patients also showed serum IgE reactivity with Bahia grass PE. However, Bahia grass and Ryegrass had only limited IgE cross-reactivity, indicating that Bahia grass should be considered in the diagnosis and treatment of patients with hay fever late in the grass pollen season.
Abstract:
BACKGROUND: Coal mining is of significant economic importance to the Australian economy. Despite this fact, the related workforce is subjected to a number of psychosocial risks and musculoskeletal injury, and various psychological disorders are common among this population group. Because only limited research has been conducted in this population group, we sought to examine the relationship between physical (pain) and psychological (distress) factors, as well as the effects of various demographic, lifestyle, and fatigue indicators on this relationship. METHODS: Coal miners (N = 231) participated in a survey of musculoskeletal pain and distress on-site during their work shifts. Participants also provided demographic information (job type, age, experience in the industry, and body mass index) and responded to questions about exercise and sleep quality (on- and off-shift) as well as physical and mental tiredness after work. RESULTS: A total of 177 workers (80.5%) reported experiencing pain in at least one region of their body. The majority of the sample population (61.9%) was classified as having low-level distress, 28.4% had scores indicating mild to moderate distress, and 9.6% had scores indicating high levels of distress. Both number of pain regions and job type (being an operator) significantly predicted distress. Higher distress score was also associated with greater absenteeism in workers who reported lower back pain. In addition, perceived sleep quality during work periods partially mediated the relationship between pain and distress. CONCLUSION: The study findings support the existence of widespread musculoskeletal pain among the coal-mining workforce, and this pain is associated with increased psychological distress. Operators (truck drivers) and workers reporting poor sleep quality during work periods are most likely to report increased distress, which highlights the importance of supporting the mining workforce for sustained productivity.
Abstract:
Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
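The multilevel prediction approach described above can be sketched as follows: workload ratings are regressed on a dynamic-density metric with a random intercept for each work unit, and an approximate 90% prediction interval is formed from the residual and between-unit variance components. Everything here, including the simulated data, column names, and the crude interval construction, is an illustrative assumption rather than the authors' model.

```python
# Minimal sketch of a multilevel workload model with an approximate 90% prediction interval.
# Simulated data; column names and the interval construction are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_units, n_obs = 12, 40
unit = np.repeat(np.arange(n_units), n_obs)
density = rng.uniform(5, 30, size=unit.size)                 # e.g. aircraft count in sector
unit_effect = rng.normal(0.0, 0.6, size=n_units)[unit]       # between-unit variability
workload = 1.0 + 0.15 * density + unit_effect + rng.normal(0.0, 0.8, size=unit.size)

df = pd.DataFrame({"workload": workload, "density": density, "unit": unit})
fit = smf.mixedlm("workload ~ density", df, groups=df["unit"]).fit()

# Approximate 90% prediction interval around the fixed-effects prediction,
# using residual variance plus between-unit (random intercept) variance.
pred = fit.predict(df)
total_sd = np.sqrt(fit.scale + float(fit.cov_re.iloc[0, 0]))
lower, upper = pred - 1.645 * total_sd, pred + 1.645 * total_sd
coverage = np.mean((df["workload"] >= lower) & (df["workload"] <= upper))
print(f"approximate 90% interval coverage on the simulated data: {coverage:.1%}")
```

In practice, the exceedances of the upper bound under nonroutine conditions reported above would show up as observations falling outside such an interval.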