266 results for Call Blocking


Relevance:

10.00%

Publisher:

Abstract:

The conventional manual power line corridor inspection processes used by most energy utilities are labor-intensive, time-consuming and expensive. Remote sensing technologies represent an attractive and cost-effective alternative approach to these monitoring activities. This paper presents a comprehensive investigation into automated remote-sensing-based power line corridor monitoring, focusing on recent innovations in two areas: increased automation of fixed-wing platforms for aerial data collection, and automated data processing for object recognition using a feature fusion process. Airborne automation is achieved by a novel approach, which we call PTAGS, that provides improved lateral control for tracking corridors and automatic real-time dynamic turning for flying between corridor segments. Improved object recognition is achieved by fusing information from multi-sensor (LiDAR and imagery) data and multiple visual feature descriptors (color and texture). The results from our experiments and field survey illustrate the effectiveness of the proposed aircraft control and feature fusion approaches.
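To make the feature fusion step concrete, the following is a minimal sketch rather than the paper's actual pipeline: it assumes per-region colour histograms, a crude gradient-based texture descriptor and simple LiDAR height statistics, concatenated into a single feature vector. All function names and feature choices here are illustrative assumptions.

```python
import numpy as np

def color_histogram(rgb_patch, bins=8):
    """Per-channel colour histogram, normalised to sum to one (assumed descriptor)."""
    hists = [np.histogram(rgb_patch[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def texture_descriptor(gray_patch, bins=8):
    """Histogram of gradient magnitudes as a crude texture feature (assumed)."""
    gy, gx = np.gradient(gray_patch.astype(float))
    mag = np.hypot(gx, gy)
    h, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-9))
    return h.astype(float) / h.sum()

def fuse_features(rgb_patch, lidar_height_patch):
    """Concatenate colour, texture and simple LiDAR height statistics into one vector."""
    gray = rgb_patch.mean(axis=2)
    lidar_stats = np.array([lidar_height_patch.mean(), lidar_height_patch.std()])
    return np.concatenate([color_histogram(rgb_patch),
                           texture_descriptor(gray),
                           lidar_stats])

# Example: fused descriptor for one candidate image/LiDAR region.
rgb = np.random.randint(0, 256, (32, 32, 3))
heights = np.random.rand(32, 32) * 20.0
print(fuse_features(rgb, heights).shape)   # (34,) with the defaults above
```

In practice the fused vector would be passed to whatever classifier the recognition stage uses; the point is only that the heterogeneous descriptors end up in a single feature space.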

Relevance:

10.00%

Publisher:

Abstract:

The availability and use of online counseling approaches has increased rapidly over the last decade. While research has suggested a range of potential affordances and limitations of online counseling modalities, very few studies have offered detailed examinations of how counselors and clients manage asynchronous email counseling exchanges. In this paper we examine email exchanges between clients and counselors at Kids Helpline, a national Australian counseling service that offers free online, email and telephone counseling for young people up to the age of 25. We employ tools from the traditions of ethnomethodology and conversation analysis to analyze the ways in which counselors from Kids Helpline request that their clients call them, and hence change the modality of their counseling relationship from email to telephone counseling. The paper shows the counselors’ three multi-layered approaches in these emails as they negotiate the potentially delicate task of requesting, and persuading, a client to change the trajectory of their counseling relationship from text to talk without placing that relationship in jeopardy.

Relevance:

10.00%

Publisher:

Abstract:

The concept of Six Sigma was initiated in the 1980s by Motorola. Since then it has been implemented in several manufacturing and service organizations. In the case of services, health care and finance have been the major beneficiaries to date. The application of Six Sigma is gradually picking up in other services such as call centers, utilities and public services. This paper provides empirical evidence on Six Sigma implementation in service industries in Singapore. Using a sample of 50 service organizations (10 responses from organizations that have implemented Six Sigma), the paper helps in understanding the status of Six Sigma in service organizations in Singapore. The findings confirm the inclusion of critical success factors, critical-to-quality characteristics, tools and key performance indicators as observed in the literature. The revelation of “not relevant” as a reason for not implementing Six Sigma shows the need to understand the specific requirements of service organizations before applying the approach.

Relevance:

10.00%

Publisher:

Abstract:

Background and aim: Falls are the leading cause of injury in older adults. Identifying people at risk before they experience a serious fall requiring hospitalisation provides an opportunity to intervene earlier and potentially reduce further falls and subsequent healthcare costs. The purpose of this project was to develop a referral pathway to a community falls-prevention team for older people who had experienced a fall attended by a paramedic service and who were not transported to hospital. It was also hypothesised that providing intervention to this group of clients would reduce future falls-related ambulance call-outs, emergency department presentations and hospital admissions. Methods: An education package, referral pathway and follow-up procedures were developed. Both services had regular meetings, and work shadowing with the paramedics was also trialled to encourage more referrals. A range of demographic and other outcome measures were collected to compare people referred through the paramedic pathway and through traditional pathways. Results: Internal data from the Queensland Ambulance Service indicated that there were approximately six falls per week by community-dwelling older persons in the eligible service catchment area (south-west Brisbane metropolitan area) who were attended to by Queensland Ambulance Service paramedics but not transported to hospital during the 2-year study period (2008–2009). Of the potential 638 eligible patients, only 17 (2.6%) were referred for a falls assessment. Conclusion: Although this pilot programme had support from all levels of management as well as from the service providers, it did not translate into actual referrals. Several explanations are offered for these preliminary findings.

Relevance:

10.00%

Publisher:

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of the four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
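To give a concrete sense of what a "CAR layered" specification can look like, the sketch below uses standard intrinsic-CAR notation; the symbols are assumptions made here for illustration, not the thesis's own notation. The neighbourhood is restricted to sites in the same depth layer d, and both the structured and unstructured variance components are allowed to differ by depth, as the abstract describes.

```latex
% Sketch (assumed notation): intrinsic CAR prior restricted to within-layer neighbours
\begin{align*}
  y_{i,d} &= \mu_d(\mathrm{treatment}_i) + \phi_{i,d} + \varepsilon_{i,d},
  \qquad \varepsilon_{i,d} \sim \mathcal{N}\!\left(0,\ \tau_d^{2}\right),\\
  \phi_{i,d} \mid \phi_{-i,d} &\sim \mathcal{N}\!\left(
      \frac{1}{n_{i,d}} \sum_{j \in \partial(i,d)} \phi_{j,d},\
      \frac{\sigma_d^{2}}{n_{i,d}} \right),
\end{align*}
```

where \partial(i,d) is the set of neighbours of site i within depth layer d and n_{i,d} is its size; \sigma_d^2 and \tau_d^2 are the depth-specific structured and unstructured variances.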

Relevance:

10.00%

Publisher:

Abstract:

Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
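To make the blocking idea concrete, here is a minimal bit-level taint sketch under assumed semantics; it is not the toolkit or model library described above, and the function names are invented for illustration. It shows how a masking expression lets only some classified bits propagate, which is exactly the kind of information that lets an evaluator discard false-positive flows.

```python
# Minimal bit-level taint model for two C-like integer operations (illustrative only).

WIDTH = 16
MASK_ALL = (1 << WIDTH) - 1

def taint_and(taint_x, const_mask):
    """Bitwise AND with a constant: a taint bit survives only where the constant bit is 1."""
    return taint_x & const_mask & MASK_ALL

def taint_shift_right(taint_x, n):
    """Logical right shift: surviving taint bits move with the data bits."""
    return (taint_x >> n) & MASK_ALL

# Suppose all 16 bits of a classified variable x are tainted.
taint_x = 0xFFFF

# (x & 0x00FF): the high byte is blocked, so only the low 8 taint bits propagate.
print(f"{taint_and(taint_x, 0x00FF):016b}")                        # 0000000011111111

# ((x & 0xFF00) >> 8): the low byte is blocked, then the surviving taint shifts down.
print(f"{taint_shift_right(taint_and(taint_x, 0xFF00), 8):016b}")  # 0000000011111111
```

An expression such as `x & 0x0000` would leave no taint bits set, marking it as one that safely downgrades its classified input.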

Relevance:

10.00%

Publisher:

Abstract:

There is unprecedented worldwide demand for mathematical solutions to complex problems. That demand has generated a further call to update mathematics education in a way that develops students’ abilities to deal with complex systems.

Relevance:

10.00%

Publisher:

Abstract:

From one view of composition—let us call it the inspired or “Mozartian” view—musical compositions arrive fully formed in the mind of the composer and simply require transcription. In reality, however, it seems that very few people are so inspired, and composition is often more akin to a gradual clarification and refinement of partially formed ideas on the musical landscape. Particular landmarks in the compositional landscape tend to become clear before others, such that the incomplete piece is a patchwork of disconnected musical islands. An interactive evolutionary morphing system may provide some assistance for composers, to help build bridges between musical islands by generating hybrid musical transitions.
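The abstract stays at this conceptual level, but as a loose, purely illustrative sketch of what "generating hybrid musical transitions" could mean in code (the note representation, the probabilistic crossfade and every name below are assumptions, not the system described), one might interpolate between two short pitch sequences like this:

```python
import random

def morph(phrase_a, phrase_b, steps=4, seed=0):
    """Generate intermediate phrases by probabilistically mixing two equal-length
    pitch sequences; early steps favour phrase_a, later steps favour phrase_b."""
    rng = random.Random(seed)
    assert len(phrase_a) == len(phrase_b)
    transition = []
    for step in range(1, steps + 1):
        weight_b = step / (steps + 1)       # how strongly phrase_b dominates this step
        hybrid = [b if rng.random() < weight_b else a
                  for a, b in zip(phrase_a, phrase_b)]
        transition.append(hybrid)
    return transition

# Two musical "islands" as MIDI note numbers.
island_a = [60, 64, 67, 72]
island_b = [57, 60, 64, 69]
for phrase in morph(island_a, island_b):
    print(phrase)
```

An evolutionary version would treat such hybrids as a population and let the composer's choices act as the fitness function.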

Relevance:

10.00%

Publisher:

Abstract:

Background: Although the potential of structured telephone support or telemonitoring to reduce hospitalisation and mortality in chronic heart failure (CHF) is well reported, the feasibility of receiving healthcare in this way is not. Aims: To determine adherence, adaptation and acceptability for a national nurse-coordinated telephone-monitoring CHF management strategy, the Chronic Heart Failure Assistance by Telephone Study (CHAT). Methods: Triangulation of descriptive statistics, feedback surveys and qualitative analysis of clinical notes. The cohort comprised standard care plus intervention (SC + I) participants who completed the first year of the study. Results: 30 GPs (70% rural) randomised to SC + I recruited 79 eligible participants, of whom 60 (76%) completed the full 12-month follow-up period. During this time 3619 calls were made into the CHAT system (mean 45.81, SD ± 79.26, range 0-369). Overall adherence to the study protocol was 65.8% (95% CI 0.54-0.75; p = 0.001); however, among the 60 participants who completed the 12-month follow-up period, adherence was significantly higher at 92.3% (95% CI 0.82-0.97, p ≤ 0.001). Only 3% of this elderly group (mean age 74.7 ± 9.3 years) were unable to learn or competently use the technology. Participants rated CHAT with a total acceptability rate of 76.45%. Conclusion: This study shows that elderly CHF patients can adapt quickly, find telephone monitoring an acceptable part of their healthcare routine, and are able to maintain good adherence for at least 12 months.

Relevance:

10.00%

Publisher:

Abstract:

The middle classes form the bulk of Indian migrants who head for Australian shores today. Yet, within Australia, general knowledge of the conditions that drive Indians’ determined search for opportunities overseas is limited to the few who have contact with international students and migrants from the sub-continent, and to the skewed, melodramatic antics of Bollywood. It is my suggestion that a broader understanding of the underlying reasons that push Indians to migrate to societies like Australia can be had through readings of Chetan Bhagat’s four hugely popular novels: Five Point Someone, One Night @ the Call Center, The 3 Mistakes of My Life and Two States. Bhagat is a graduate of India’s famed Indian Institute of Technology and a former Non-Resident Indian investment banker who has since returned to live in Delhi. His experiences make him the perfect mouthpiece for middle India, and his paperbacks depict that stratum of Indian society’s obsessions with social mobility, marriage, and regional and religious divides with great sympathy and conviction. Drawing on observations made during a recent visit to India, I illustrate what an exploration of Bhagat’s paperbacks reveals about everyday, contemporary India and what it adds to Australian understandings of Indians and India today.

Relevance:

10.00%

Publisher:

Abstract:

This paper explores how the design of creative clusters as a key strategy in promoting the urban creative economy has played out in Shanghai. Creative clusters in the European and North American context emerged ‘organically’: they developed spontaneously in cities that went through a period of post-industrial decline. Creative industries grew up in these cities as part of a new urban economy in the wake of old manufacturing industries. Artists and creative entrepreneurs moved into vacant warehouses and factories and began the trend of ‘creative clusters’. Such clusters facilitate the transfer of tacit knowledge through informal learning, the efficient sourcing of skills and information, competition, collaboration and learning, inter-cluster trading and networking. This new urban phenomenon was soon targeted by local economic development policy charged with regenerating and restructuring industrial activities in cities. Rising interest from real estate and local economic development has led to more and more planned creative clusters. With the aim of catching up with the world’s creative cities, Shanghai has planned over 100 creative clusters since 2005. Along with these officially designed creative clusters, there are organically emerged creative clusters that are much smaller in scale and much more informal in their management. They emerged originally in old residential areas just outside the CBD and expanded to include the French Concession, the most sought-after residential area at the edge of the CBD. More recently, office buildings within the CBD have been made available for creative uses. From fringe to CBD, these organic creative clusters provide crucial evidence for the design of creative clusters. This paper is organized in two parts. In the first part, I present a case study of eight ‘official’ clusters (a title granted by the local government) in Shanghai, through which I hope to develop some key indicators of the success or failure of creative clusters and to link them with their physical, social and operational efficacies. In the second part, a variety of ‘alternative’ clusters (organically formed clusters, most of which are not recognized by the government) offers the possibility of rethinking the so-called ‘cluster development strategy’: what kinds of spaces are appropriate for use by clusters? Who should manage them, and in what form? And, ultimately, how should their relationship with the rest of the city be defined?

Relevance:

10.00%

Publisher:

Abstract:

Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
Before this study, no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison was carried out between data from PS-SCLs and libraries containing individually synthesised peptides (sparse matrix library; SML). These SML libraries were designed to include all possible sequence combinations of the residues suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition-state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor from reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
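As a toy numerical illustration of the PS-SCL limitation discussed above, the short script below uses invented residues and rates (not experimental data) to show that a positional average reports one mean rate per residue and therefore cannot reveal that a residue is strongly preferred only in combination with a particular neighbouring residue, whereas screening each sequence individually does expose the cooperative pair.

```python
# Toy illustration with invented numbers: positional averaging (PS-SCL-style)
# versus individual-sequence screening (SML-style) for two subsites.
import itertools

P1_RESIDUES = ["R", "K"]   # hypothetical P1 candidates
P2_RESIDUES = ["F", "A"]   # hypothetical P2 candidates

def rate(p2, p1):
    """Assumed hydrolysis rates; the F-R pair is cooperatively favoured (x5)."""
    base = {"R": 1.0, "K": 0.9}[p1] * {"F": 1.0, "A": 0.8}[p2]
    return base * (5.0 if (p2, p1) == ("F", "R") else 1.0)

# PS-SCL-style readout: average over all P2 partners for each fixed P1 residue.
for p1 in P1_RESIDUES:
    avg = sum(rate(p2, p1) for p2 in P2_RESIDUES) / len(P2_RESIDUES)
    print(f"P1={p1}: positional average rate = {avg:.2f}")

# SML-style readout: every individual P2-P1 combination.
for p2, p1 in itertools.product(P2_RESIDUES, P1_RESIDUES):
    print(f"{p2}-{p1}: {rate(p2, p1):.2f}")
```

The averaged profile only says that R at P1 is preferred overall; the individual rates show that the preference is driven almost entirely by the F-R combination, which is exactly the kind of cooperative substrate a PS-SCL screen cannot single out.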

Relevance:

10.00%

Publisher:

Abstract:

Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools to deal with this type of information, and hence has played an important role in analyses of biological data. In this thesis, we aim to develop some new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine the FCM with the empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, and preferably to remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, and silhouette values were used to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have tighter association with their clusters. Furthermore, we found that the estimation of the fuzzy parameter m, which is a difficult step, can be avoided to some extent by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., patients and normal people. We developed a type-2 fuzzy membership (FM) function for identification of disease-associated genes. This approach was applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked genes for diabetes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature, and an additional gene was newly identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines fuzzy clustering and interaction probability to identify the overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks.
The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
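As a rough sketch of the denoise-then-cluster idea behind FCM-EMD (not the thesis's implementation): the EMD step is stood in for by a simple moving-average smoother, since a real EMD routine would be an external dependency, and the fuzzy C-means update is written out in plain NumPy. All parameter choices and function names here are illustrative assumptions.

```python
import numpy as np

def crude_denoise(profiles, window=3):
    """Placeholder for EMD-based denoising: moving-average smoothing of each profile."""
    kernel = np.ones(window) / window
    return np.array([np.convolve(p, kernel, mode="same") for p in profiles])

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain NumPy fuzzy C-means; returns the membership matrix U (n x c) and centroids."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # random initial fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))                # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centroids

# Synthetic gene-expression-like profiles stand in for microarray rows.
rng = np.random.default_rng(1)
t = np.linspace(0, 3, 20)
genes = np.vstack([np.sin(t) + 0.3 * rng.standard_normal((50, 20)),
                   np.cos(t) + 0.3 * rng.standard_normal((50, 20))])
U, centroids = fuzzy_c_means(crude_denoise(genes), c=2)
print(U.shape, centroids.shape)   # (100, 2) (2, 20)
```

Silhouette values, as mentioned in the abstract, could then be computed on hard assignments taken from U.argmax(axis=1) to compare clustering on raw versus denoised profiles.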

Relevance:

10.00%

Publisher:

Abstract:

Public concern about the crime of human trafficking has risen dramatically over the last two decades. This concern and panic has both spawned and been fuelled by an array of public awareness campaigns that aim to educate the public about this crime. Campaigns such as the Blue Blindfold Campaign in the UK, the UN-driven Blue Heart Campaign, and the worldwide Body Shop campaign have contributed to the public’s awareness and, to an extent, understanding of the phenomenon of human trafficking. This research explores these and other government and non-government campaigns aimed at raising public awareness of human trafficking. It questions the rationale, call to action and impact of these efforts, and analyses the depiction of trafficking victims in these campaigns. In particular, this research argues that some of these campaigns perpetuate an understanding of a hierarchy of victimisation in trafficking. A public focus on sex trafficking often results in the conflation of prostitution and trafficking, and renders invisible the male and female victims of trafficking for other forms of labour.