244 results for Set-Valued Functions
Abstract:
The question posed in this chapter is: to what extent do current education theory and practice prepare graduates for the creative economy? We first define what we mean by the term creative economy, explain why we think it is a significant point of focus, derive its key features, describe the human capital requirements of these features, and then discuss whether current education theory and practice are producing these human capital requirements. The term creative economy can be critiqued as a shibboleth, but as a high-level metaphor it nevertheless has value in directing us away from certain sorts of economic activity and toward other kinds. Much economic activity is in no way creative: if I have a monopoly on some valued resource, I do not need to be creative. Other forms of economic activity are intensely creative: if I have no valued resources, I must create something that is valued. At its simplest and yet most profound, the idea of a creative economy suggests a capacity to compete by engaging in a gainful activity that is different from everyone else's, rather than by pursuing the same endeavor more competitively than everyone else. The ability to differentiate on novelty is key to the concept of the creative economy and key to our analysis of education for this economy. Therefore, we follow Potts and Cunningham (2008, p. 18) and Potts, Cunningham, Hartley, and Ormerod (2008) in their discussion of the economic significance of the creative industries and see the creative economy not as a sector but as a set of economic processes that act on the economy as a whole to invigorate innovation-based growth. We see the creative economy as suffused throughout all industries rather than as a sector in its own right. These economic processes are essentially concerned with the production of new ideas that ultimately become new products, services, or industry sectors, or, in some cases, process or product innovations in older sectors.
Therefore, our starting point is that modern economies depend on innovation, and we see the core of innovation as new knowledge of some kind. We commence with some observations about innovation.
Abstract:
Background: There is a growing trend for individuals to seek health information from online sources. Alcohol and other drug (AOD) use is a significant health problem worldwide, but access to and use of AOD websites are poorly understood. Objective: To investigate content and functionality preferences for AOD and other health websites. Methods: An anonymous online survey examined general Internet and AOD-specific usage and search behaviors, valued features of AOD and health-related websites (general and interactive website features), indicators of website trustworthiness, valued AOD website tools or functions, and treatment modality preferences. Results: Surveys were obtained from 1214 drug (n = 766) and alcohol website users (n = 448) (mean age 26.2 years, range 16-70). There were no significant differences between alcohol and drug groups on demographic variables, Internet usage, indicators of website trustworthiness, or preferences for AOD website functionality. Robust website design/navigation, open access, and validated content provision were highly valued by both groups. While attractiveness and pictures or graphics were also valued, high-cost features (videos, animations, games) were minority preferences. Almost half of respondents in both groups were unable to readily access the information they sought. Alcohol website users placed greater importance than other drug website users on several AOD website tools and functions: online screening tools (χ²₂ = 15.8, P < .001, n = 985); prevention programs (χ²₂ = 27.5, P < .001, n = 981); tracking functions (χ²₂ = 11.5, P = .003, n = 983); self-help treatment programs (χ²₂ = 8.3, P = .02, n = 984); downloadable fact sheets for friends (χ²₂ = 11.6, P = .003, n = 981); and fact sheets for family (χ²₂ = 12.7, P = .002, n = 983). The most preferred online treatment option for both user groups was an Internet site with email therapist support.
Explorations of demographic differences were also performed. While gender did not affect survey responses, younger respondents were more likely to value interactive and social networking features, whereas downloading of credible information was most highly valued by older respondents. Conclusions: Significant deficiencies in the provision of accessible information on AOD websites were identified, an important problem since information seeking was the most common reason for accessing these websites and therefore may be a key avenue for engaging website users in behaviour change. The few differences between AOD website users suggest that both types of websites may share similar features, although alcohol website users may more readily be engaged in screening, prevention, and self-help programs and in tracking change, and may value fact sheets more highly. While the sociodemographic differences require replication and clarification, they support the notion that the design and features of AOD websites should target specific audiences to have maximal impact.
Abstract:
This paper presents a framework for performing real-time recursive estimation of landmarks' visual appearance. Imaging data in its original high-dimensional space is probabilistically mapped to a compressed low-dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low-dimensional representation of the landmark's visual appearance. The overall filtering provides information complementary to the conventional position estimates, which is used to enhance data association. In addition to robotic observations, the filter integrates human observations into the appearance estimates. The appearance tracks computed by the filter allow landmark classification. The set of labels involved in the classification task is thought of as an observation space in which human observations are made by selecting a label. The low-dimensional appearance estimates returned by the filter allow for low-cost communication in low-bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground vehicle, and an air vehicle.
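The recursive fusion step described in the abstract can be sketched as a simple conjugate Gaussian update in the compressed space. This is a minimal, hypothetical illustration: the diagonal-Gaussian assumption, the function names, and the numbers are invented here, not taken from the paper's formulation.

```python
import numpy as np

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Fuse a compressed (low-dimensional) observation with the prior.

    Assumes independent Gaussian dimensions, so the update reduces to a
    per-dimension Kalman-style correction. Purely illustrative.
    """
    gain = prior_var / (prior_var + obs_var)        # per-dimension gain
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Two noisy low-dimensional observations of the same landmark's appearance
mean, var = np.zeros(3), np.full(3, 10.0)           # vague prior
for z in [np.array([1.0, 2.0, 0.5]), np.array([1.2, 1.8, 0.7])]:
    mean, var = bayes_update(mean, var, z, obs_var=np.full(3, 1.0))

print(mean)   # posterior appearance estimate, pulled toward the observations
print(var)    # variance shrinks as evidence accumulates
```

Each update tightens the posterior, which is what makes the low-dimensional estimate cheap to communicate and usable for data association.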
Abstract:
Automobiles have deeply impacted the way in which we travel, but they have also contributed to many deaths and injuries due to crashes. Researchers have pointed out a number of reasons for these crashes. Inexperience has been identified as a contributing factor, and drivers' abilities to judge the road environment and react in time to avoid possible collisions also play a vital role. Therefore, drivers' perceptual and motor skills remain key factors in road safety. Our failure to understand what is really important for learners, in terms of competent driving, is one of the many challenges for building better training programs. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers. Currently, there is a need to develop a comprehensive driver evaluation system that benefits from advances in Driver Assistance Systems. A multidisciplinary approach is necessary to explain how driving abilities evolve with on-road driving experience. To our knowledge, driver assistance systems have never been comprehensively used in a driver training context to assess the safety aspect of driving. The aim and novelty of this thesis is to develop and evaluate an Intelligent Driver Training System (IDTS) as an automated assessment tool that helps drivers and their trainers to comprehensively view complex driving manoeuvres and potentially provides effective feedback by post-processing the data recorded during driving. The system is designed to help driver trainers accurately evaluate driver performance and has the potential to provide valuable feedback to drivers. Since driving depends on fuzzy inputs from the driver (i.e., approximate distance calculations from other vehicles, approximate assumptions about other vehicles' speeds), the evaluation system must be based on criteria and rules that handle the uncertain and fuzzy characteristics of driving tasks. Therefore, the proposed IDTS utilizes fuzzy set theory for the assessment of driver performance. The proposed research program focuses on integrating the multi-sensory information acquired from the vehicle, driver, and environment to assess driving competencies. After information acquisition, the research focuses on automated segmentation of the selected manoeuvres from the driving scenario. This leads to the creation of a model that determines a "competency" criterion through the driving performance protocol used by driver trainers (i.e., expert knowledge) to assess drivers. This is achieved by comprehensively evaluating the data stream acquired from multiple in-vehicle sensors using fuzzy rules and classifying the driving manoeuvres (i.e., overtake, lane change, T-crossing, and turn) as low or high competency. The fuzzy rules use parameters such as following distance, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvres to assess competency. These rules were initially designed with the help of experts' knowledge (i.e., driver trainers). In order to fine-tune the rules and the parameters that define them, a driving experiment was conducted to identify empirical differences between novice and experienced drivers. The results indicated that significant differences existed between novice and experienced drivers in terms of their gaze pattern and duration, speed, stop time at the T-crossing, lane keeping, and the time spent in lanes while performing the selected manoeuvres.
These differences were used to refine the fuzzy membership functions and rules that govern the assessment of the driving tasks. Next, this research focused on providing an integrated visual assessment interface to both driver trainers and their trainees. By providing a rich set of interactive graphical interfaces displaying information about the driving tasks, the IDTS visualisation module has the potential to give empirical feedback to its users. Lastly, the IDTS assessments were validated by comparing the system's objective assessments of particular manoeuvres in the driving experiment with the subjective assessments of the driver trainers. Results show that not only was IDTS able to match the subjective assessments made by driver trainers during the driving experiment, it also identified some additional manoeuvres performed at low competency that were not identified by the trainers, owing to the increased mental workload of assessing the multiple variables that constitute driving. The validation of IDTS emphasized the need for an automated assessment tool that can segment the manoeuvres from the driving scenario, investigate the variables within each manoeuvre to determine its competency, and provide integrated visualisation of the manoeuvre to its users (i.e., trainers and trainees). Through analysis and validation it was shown that IDTS is a useful assistance tool for driver trainers to empirically assess, and potentially provide feedback on, the manoeuvres undertaken by drivers.
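The fuzzy-rule idea in this abstract can be sketched with triangular membership functions over two of the named parameters (following distance and braking) combined by a min-AND rule. All thresholds and the two-parameter scope are invented for illustration; in the thesis such parameters are tuned from expert knowledge and the novice/experienced driver experiment.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises a -> b, falls b -> c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_overtake(follow_dist_m, peak_brake_ms2):
    """Toy competency rule for an overtake manoeuvre (hypothetical values)."""
    safe_dist = tri(follow_dist_m, 10, 30, 60)   # "adequate gap" membership
    smooth = tri(peak_brake_ms2, -1, 1.5, 4)     # "no harsh braking" membership
    high = min(safe_dist, smooth)                # fuzzy AND of both conditions
    return "high" if high >= 0.5 else "low"

print(assess_overtake(28, 1.2))   # → high (good gap, gentle braking)
print(assess_overtake(8, 5.0))    # → low  (tailgating and hard braking)
```

The graded memberships are what let the system tolerate the approximate, uncertain nature of the driver inputs described above.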
Abstract:
Planning the utilization of train-sets is one of the key tasks of transport organization for passenger-dedicated railways in China. It also has strong relationships with timetable scheduling and station operation plans. To execute this task in a railway hub pooling multiple railway lines, the characteristics of multiple routing for train-sets are discussed in terms of the semicircle of train-set turnover. The problem is formulated with minimum dwell time as the objective, subject to constraints on train-set dispatch, connecting conditions, the uniqueness principle for train-sets, and first-priority connection in the same direction based on a time tolerance σ. A compact connection algorithm based on the time tolerance is then designed. The feasibility of the model and the algorithm is demonstrated by a case study. The results indicate that the circulation model and algorithm for multiple routing can handle connections between train-sets from multiple directions, and reduce the impact of trains pulling in and out on the station throat.
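The compact connection idea can be sketched as a greedy matching: each arriving train-set is connected to the earliest feasible departure (dwell time at least a minimum), preferring a same-direction departure when one exists within the time tolerance σ of that earliest option. The data, parameter values, and greedy strategy here are illustrative assumptions, not the paper's actual model.

```python
def connect(arrivals, departures, t_min=20, sigma=15):
    """Greedy train-set connection with a same-direction preference.

    arrivals/departures are (minute, direction) tuples; purely a sketch.
    """
    plan, free = [], sorted(departures, key=lambda d: d[0])
    for a_time, a_dir in sorted(arrivals):
        feasible = [d for d in free if d[0] >= a_time + t_min]
        if not feasible:
            continue                       # train-set left unconnected
        best = feasible[0]                 # earliest feasible departure
        # prefer a same-direction departure within sigma of the earliest one
        same = [d for d in feasible
                if d[1] == a_dir and d[0] - best[0] <= sigma]
        if same:
            best = same[0]
        plan.append(((a_time, a_dir), best))
        free.remove(best)                  # each departure used once
    return plan

arr = [(480, "N"), (495, "S")]             # arrival minute, direction
dep = [(505, "S"), (510, "N"), (530, "S")]
for a, d in connect(arr, dep):
    print(a, "->", d)
```

In this toy instance the 480 "N" arrival skips the earliest 505 "S" slot for the 510 "N" slot, since it lies within σ = 15 minutes, illustrating the same-direction priority.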
Abstract:
The Acton Peninsula project alliance is the first project alliance in building construction in the world. The project alliance sets out to achieve the best possible outcome for the project, with all participants in the alliance sharing both risks and rewards. The construction of the National Museum of Australia and the Australian Institute of Aboriginal and Torres Strait Islander Studies on Acton Peninsula in Canberra will be a significant Australian architectural and construction achievement. The design and construction project team is committed to achieving outstanding results in all aspects of the design, construction, and delivery of this significant national project. Innovation and creativity are valued, and outstanding performance will be rewarded.
Abstract:
This article applies social network analysis techniques to a case study of police corruption in order to produce findings that will assist in corruption prevention and investigation. Police corruption is commonly studied, but sophisticated analytical tools are rarely engaged to add rigour to the field. This article analyses the 'First Joke', a systemic and long-lasting corruption network in the Queensland Police Force, a state police agency in Australia. It uses data obtained from a commission of inquiry that exposed the network and develops hypotheses as to the nature of the network's structure based on the existing literature on dark networks and criminal networks. These hypotheses are tested by entering the data into UCINET and analysing the outcomes through the social network analysis measures of average path distance, centrality, and density. The conclusions reached show that the network has characteristics not predicted by the literature.
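Two of the measures named above, density and average path distance, are easy to make concrete on a toy graph. The study itself used UCINET on commission-of-inquiry data; the four-node graph below is invented purely to show what the measures compute.

```python
from collections import deque

# Invented toy network: 4 actors, 4 ties
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
nodes = sorted({n for e in edges for n in e})
adj = {n: set() for n in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))     # observed ties / possible ties

def dists(src):
    """Shortest path lengths from src via breadth-first search."""
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

pairs = [(u, v) for u in nodes for v in nodes if u < v]
avg_path = sum(dists(u)[v] for u, v in pairs) / len(pairs)

print(round(density, 3))    # → 0.667
print(round(avg_path, 3))   # → 1.333
```

Dark networks are often hypothesized to trade density for longer, more covert paths; these two numbers are exactly what such hypotheses are tested against.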
Abstract:
Genomic and proteomic analyses have attracted a great deal of interest in biological research in recent years. Many methods have been applied to discover useful information contained in the enormous databases of genomic and amino acid sequences, and the results of these investigations inspire further biological research in return. These biological sequences, which may be considered multiscale sequences, have specific features that require more refined methods to characterise. This project studies some of these biological challenges with multiscale analysis methods and a stochastic modelling approach. The first part of the thesis aims to cluster unknown proteins and classify their families as well as their structural classes. A development in proteomic analysis is concerned with the determination of protein functions, and the first step is to classify proteins and predict their families. This motivates us to study unknown proteins from specific families, and to cluster them into families and structural classes. We select a large number of proteins from the same families or superfamilies, and link them to simulate unknown large proteins from these families. We use multifractal analysis and the wavelet method to capture the characteristics of these linked proteins. The simulation results show that the method is valid for the classification of large proteins. The second part of the thesis aims to explore the relationships of proteins based on a layered comparison of their components. Many methods are based on the homology of proteins, because resemblance at the protein sequence level normally indicates similarity of functions and structures. However, some proteins may have similar functions with low sequence identity. We consider protein sequences at a detailed level to investigate the problem of protein comparison.
The comparison is based on the empirical mode decomposition (EMD), and protein sequences are represented by their intrinsic mode functions. A measure of similarity is introduced with a new cross-correlation formula. The similarity results show that the EMD is useful for detecting functional relationships of proteins. The third part of the thesis aims to investigate the transcriptional regulatory network of the yeast cell cycle via stochastic differential equations. As the investigation of genome-wide gene expression has become a focus of genomic analysis, researchers have tried for many years to understand the mechanisms of the yeast genome. How cells control gene expression still needs further investigation. We use a stochastic differential equation to model the expression profile of a target gene, and modify the model with a Gaussian membership function. For each target gene, a transcriptional rate is obtained, and the estimated transcriptional rate is also calculated using information from five possible transcriptional regulators. Some regulators of these target genes are verified against the related references. With these results, we construct a transcriptional regulatory network for genes from the yeast Saccharomyces cerevisiae. The construction of the transcriptional regulatory network is useful for uncovering further mechanisms of the yeast cell cycle.
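The cross-correlation similarity step can be sketched on numeric sequence profiles. This is a hypothetical illustration: the thesis first decomposes profiles into intrinsic mode functions via EMD and uses its own correlation formula, whereas the sketch applies a standard normalized cross-correlation directly to invented profiles.

```python
import numpy as np

def norm_xcorr(x, y):
    """Zero-lag normalized cross-correlation, in [-1, 1]."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.dot(x, y) / len(x))

# Invented numeric profiles (e.g. a hydrophobicity encoding of residues)
a = np.array([0.2, 1.0, -0.5, 0.8, -1.1, 0.4])
b = a + 0.1 * np.array([1, -1, 1, -1, 1, -1])   # slightly perturbed copy
c = -a                                           # anti-correlated profile

print(round(norm_xcorr(a, b), 3))   # close to 1: similar profiles
print(round(norm_xcorr(a, c), 3))   # → -1.0: opposite profiles
```

A high score for the perturbed copy and a low score for the inverted profile is the behaviour such a similarity measure needs to flag functionally related proteins despite low sequence identity.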
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment has been investigated, but these also present limitations. Hence, no "gold standard" test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern also becomes irregular due to light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, and receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into an image of quasi-straight lines from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis, and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal, and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
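The polar-transform metric can be sketched on a synthetic ring pattern: after resampling from Cartesian to polar coordinates, each ring becomes (approximately) one constant row, so the variance along rows measures pattern regularity. The synthetic image, sampling grid, and block statistic below are all invented for illustration; the thesis applies this idea to Placido-disk video frames.

```python
import numpy as np

# Synthetic concentric-ring pattern on a 101 x 101 grid
y, x = np.mgrid[-50:51, -50:51]
r = np.hypot(x, y)
rings = (np.sin(r * 0.8) > 0).astype(float)

# Resample onto a (radius, angle) grid: rings -> near-straight rows
radii = np.arange(1, 50)
angles = np.linspace(0, 2 * np.pi, 180, endpoint=False)
pr = radii[:, None] * np.cos(angles)[None, :] + 50
pc = radii[:, None] * np.sin(angles)[None, :] + 50
polar = rings[pr.round().astype(int), pc.round().astype(int)]

# Block statistic: mean per-row std; lower = more regular pattern
metric = float(polar.std(axis=1).mean())

# A disturbed pattern (simulating tear film irregularity) raises the metric
rng = np.random.default_rng(0)
noisy = np.clip(rings + 0.5 * rng.standard_normal(rings.shape), 0, 1)
polar_n = noisy[pr.round().astype(int), pc.round().astype(int)]
metric_noisy = float(polar_n.std(axis=1).mean())

print(round(metric, 3), round(metric_noisy, 3))   # irregularity raises the metric
```

The point of the transform is exactly this separability: in polar coordinates a regular Placido pattern has almost no variation along each row, so even subtle disturbances stand out in the block statistics.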
Abstract:
Despite many arguments to the contrary, the three-act story structure, as propounded and refined by Hollywood, continues to dominate the blockbuster and independent film markets. Recent successes in post-modern cinema could indicate new directions and opportunities for low-budget national cinemas.
Abstract:
This paper presents a fault diagnosis method based on an adaptive neuro-fuzzy inference system (ANFIS) in combination with decision trees. The classification and regression tree (CART), one of the decision tree methods, is used as a feature selection procedure to select pertinent features from the data set. The crisp rules obtained from the decision tree are then converted to fuzzy if-then rules that are employed to identify the structure of the ANFIS classifier. A hybrid of back-propagation and least-squares algorithms is utilized to tune the parameters of the membership functions. To evaluate the proposed algorithm, data sets obtained from vibration signals and current signals of induction motors are used. The results indicate that the CART-ANFIS model has potential for fault diagnosis of induction motors.
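The crisp-to-fuzzy conversion step can be sketched by softening a decision-tree threshold into a sigmoid membership function. The feature name, threshold, and slope below are invented for illustration; in the paper such membership parameters are tuned by the hybrid back-propagation / least-squares procedure.

```python
import numpy as np

def crisp_rule(x, t=0.6):
    """Hard tree rule: IF vibration_rms > t THEN fault (hypothetical feature)."""
    return float(x > t)

def fuzzy_rule(x, t=0.6, slope=20.0):
    """Fuzzy version: graded membership replacing the hard threshold."""
    return 1.0 / (1.0 + np.exp(-slope * (x - t)))

for x in (0.3, 0.58, 0.62, 0.9):
    print(x, crisp_rule(x), round(fuzzy_rule(x), 3))
```

Near the threshold (0.58 vs 0.62) the crisp rule flips abruptly between 0 and 1, while the fuzzy membership stays close to 0.5, which is what gives the ANFIS classifier tunable, smooth decision boundaries.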
Abstract:
The ways in which a society sets standards of behaviour and conduct for its members vary hugely: accepted practices, recognised customs, spiritually or morally inspired norms, judicially declared rules, executively formulated edicts, formal legislative enactments, or constitutionally embedded rights and duties. Whatever form they assume, these standards are artificial constructions of the human mind. Accordingly, the law, whatever its form, can do no more and no less than regulate or set standards for human behaviour, human conduct, and human decision-making. The law cannot regulate the environment. It can only regulate human activities that impact directly or indirectly upon the environment. This applies as much to wetlands as components of the environment as it does to any other components of the environment or the environment at large. The capacity of the law to protect the environment, and therefore wetlands, is thus totally dependent upon its capacity to regulate human behaviour, human conduct, and human decision-making. At the same time, the law needs to reflect the specific nature, functions, and locations of wetlands. A wetland is an ecosystem by itself; it comprises a range of ecosystems within it; and it is part of a wider set of ecosystems. Hence the significant ecological functions performed by wetlands. Then there are the benefits flowing to humans from wetlands: these may be social, economic, cultural, aesthetic, or a combination of some or all of these. It is a challenge for a society acting through its legal system to find the appropriate balance between these ecological and human values. But that is what sustainability requires.
Abstract:
Optimal design for generalized linear models has primarily focused on univariate data. Often, experiments are performed that have multiple dependent responses described by regression-type models, and it is of interest and value to design the experiment for all of these responses. This requires a multivariate distribution underlying a pre-chosen model for the data. Here, we consider the design of experiments for bivariate binary data which are dependent. We explore copula functions, which provide a rich and flexible class of structures for deriving joint distributions for bivariate binary data. We present methods for deriving optimal experimental designs for dependent bivariate binary data using copulas, and demonstrate that, by including the dependence between responses in the design process, more efficient parameter estimates are obtained than by the usual practice of simply designing for a single variable only. Further, we investigate the robustness of designs with respect to initial parameter estimates and the choice of copula function, and also show the performance of compound criteria within this bivariate binary setting.
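The copula construction for dependent bivariate binary data can be made concrete with one member of the family, the Farlie-Gumbel-Morgenstern (FGM) copula: evaluating it at the marginal Bernoulli CDFs yields the full 2x2 joint probability table. The choice of FGM and the marginal probabilities and dependence parameter below are illustrative assumptions; the paper's copula choices and values may differ.

```python
def fgm(u, v, theta):
    """FGM copula C(u, v) = uv(1 + theta(1-u)(1-v)), valid for -1 <= theta <= 1."""
    return u * v * (1 + theta * (1 - u) * (1 - v))

def joint_table(p1, p2, theta):
    """2x2 joint distribution of (Y1, Y2) with Bernoulli(p1), Bernoulli(p2) margins."""
    q1, q2 = 1 - p1, 1 - p2
    p00 = fgm(q1, q2, theta)      # P(Y1=0, Y2=0) = C(F1(0), F2(0))
    p01 = q1 - p00                # P(Y1=0, Y2=1)
    p10 = q2 - p00                # P(Y1=1, Y2=0)
    p11 = 1 - q1 - q2 + p00       # P(Y1=1, Y2=1)
    return p00, p01, p10, p11

tab = joint_table(0.7, 0.4, theta=0.8)
print([round(p, 4) for p in tab])     # four cell probabilities, summing to 1
print(round(tab[3] - 0.7 * 0.4, 4))   # positive theta lifts P11 above independence
```

A design criterion (e.g. D-optimality) would then be evaluated against the likelihood built from such joint tables, which is how the dependence between responses enters the design process.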