830 results for Representation of time


Relevance:

100.00%

Publisher:

Abstract:

Every year, music piracy worldwide causes several billion dollars in economic losses, lost jobs and lost worker earnings, as well as the loss of millions of dollars in tax revenue. Most music piracy stems from the rapid growth and ease of current technologies for copying, sharing, manipulating and distributing musical data [Domingo, 2015], [Siwek, 2007]. Audio watermarking has been proposed to protect authors' rights and to localise the instants at which an audio signal has been tampered with. In this thesis, we propose to use the bio-inspired sparse spike-train representation (spikegram) to design a new method for localising tampering in audio signals, a new copyright-protection method and, finally, a new perceptual attack, based on the spikegram, against audio watermarking systems. We first propose a technique for localising tampering in audio signals. To this end, we combine a modified spread spectrum (MSS) method with a sparse representation. We use an adapted perceptual matching pursuit technique (PMP [Hossein Najaf-Zadeh, 2008]) to generate a sparse representation (spikegram) of the input audio signal that is invariant to time shifts [E. C. Smith, 2006] and that takes into account the masking phenomena observed in hearing. An authentication code is embedded in the coefficients of the spikegram representation, which are then combined with the masking thresholds. The watermarked signal is resynthesised from the modified coefficients, and the resulting signal is transmitted to the decoder.
At the decoder, to identify a tampered segment of the audio signal, the authentication codes of all intact segments are analysed. If the codes cannot be detected correctly, we know that the segment has been tampered with. We propose watermarking based on the spread-spectrum principle (called MSS) in order to obtain a large capacity in the number of embedded watermark bits. In situations where the encoder and decoder are desynchronised, our method is still able to detect tampered pieces. Compared to the state of the art, our approach has the lowest error rate in detecting tampered pieces. We used the mean opinion score (MOS) test to measure the quality of the watermarked systems. We evaluate the semi-fragile watermarking method by the bit error rate (number of erroneous bits divided by the total number of embedded bits) under several attacks. The results confirm the superiority of our approach for localising tampered pieces in audio signals while preserving signal quality. We then propose a new technique for the protection of audio signals. This technique is based on the spikegram representation of audio signals and uses two dictionaries (TDA: Two-Dictionary Approach). The spikegram is used to encode the host signal using a dictionary of gammatone filters. For watermarking, we use two different dictionaries that are selected according to the input bit to be embedded and to the content of the signal. Our approach finds the appropriate gammatones (called watermark kernels) based on the value of the bit to be embedded, and embeds the watermark bits in the phase of the watermark gammatones. Moreover, it is shown that the TDA is error-free in the absence of any attack.
It is demonstrated that decorrelating the watermark kernels allows the design of a very robust audio watermarking method. Experiments showed the best robustness for the proposed method when the watermarked signal is corrupted by MP3 compression at 32 kbps with a payload of 56.5 bps, compared to several recent techniques. We also studied the robustness of the watermark when the new USAC (Unified Speech and Audio Coding) codec at 24 kbps is used; the payload is then between 5 and 15 bps. Finally, we use spikegrams to propose three new attack methods, which we compare to recent attacks such as 32 kbps MP3 and 24 kbps USAC. These attacks comprise the PMP attack, the inaudible-noise attack and the sparse replacement attack. In the PMP attack, the watermarked signal is represented and resynthesised with a spikegram. In the inaudible-noise attack, inaudible noise is generated and added to the spikegram coefficients. In the sparse replacement attack, in each segment of the signal, the spectro-temporal features of the signal (the time spikes) are found using the spikegram, and similar time spikes are replaced with one another. To compare the effectiveness of the proposed attacks, we apply them to a spread-spectrum watermark decoder. It is demonstrated that the sparse replacement attack reduces the normalised correlation of the spread-spectrum decoder by a larger factor than when the decoder is attacked by MP3 (32 kbps) or 24 kbps USAC compression.
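The perceptual matching pursuit at the heart of the spikegram builds on plain matching pursuit. A minimal sketch of the greedy step, with a hypothetical random dictionary standing in for the thesis' gammatone dictionary and no auditory masking model:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=20):
    """Greedy sparse decomposition: each step projects the residual on
    every atom (column), keeps the strongest one, and subtracts its
    contribution. PMP extends this with an auditory masking model."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        projections = dictionary.T @ residual
        k = int(np.argmax(np.abs(projections)))
        coeffs[k] += projections[k]
        residual -= projections[k] * dictionary[:, k]
    return coeffs, residual

# Hypothetical random dictionary with unit-norm atoms standing in for
# the gammatone dictionary used in the thesis.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 5] - 2.0 * D[:, 40]      # 2-sparse test signal
coeffs, residual = matching_pursuit(x, D)
```

The resulting sparse coefficients are the values a watermarking scheme of this kind would perturb before resynthesis.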

Relevance:

100.00%

Publisher:

Abstract:

Heart disease is attributed as the highest cause of death in the world. Although this could be alleviated by heart transplantation, there is a chronic shortage of donor hearts and so mechanical solutions are being considered. Currently, many Ventricular Assist Devices (VADs) are being developed worldwide in an effort to increase life expectancy and quality of life for end stage heart failure patients. Current pre-clinical testing methods for VADs involve laboratory testing using Mock Circulation Loops (MCLs), and in vivo testing in animal models. The research and development of highly accurate MCLs is vital to the continuous improvement of VAD performance. The first objective of this study was to develop and validate a mathematical model of a MCL. This model could then be used in the design and construction of a variable compliance chamber to improve the performance of an existing MCL as well as form the basis for a new miniaturised MCL. An extensive review of literature was carried out on MCLs and mathematical modelling of their function. A mathematical model of a MCL was then created in the MATLAB/SIMULINK environment. This model included variable features such as resistance, fluid inertia and volumes (resulting from the pipe lengths and diameters); compliance of Windkessel chambers, atria and ventricles; density of both fluid and compressed air applied to the system; gravitational effects on vertical columns of fluid; and accurately modelled actuators controlling the ventricle contraction. This model was then validated using the physical properties and pressure and flow traces produced from a previously developed MCL. A variable compliance chamber was designed to reproduce parameters determined by the mathematical model. The function of the variability was achieved by controlling the transmural pressure across a diaphragm to alter the compliance of the system. 
An initial prototype was tested in a previously developed MCL, and a variable level of arterial compliance was successfully produced; however, the complete range of compliance values required for accurate physiological representation could not be produced with this initial design. The mathematical model was then used to design a smaller physical mock circulation loop, with the tubing sizes adjusted to produce accurate pressure and flow traces whilst having an appropriate frequency response characteristic. The development of the mathematical model greatly assisted the general design of an in vitro cardiovascular device test rig, while the variable compliance chamber allowed simple, real-time manipulation of MCL compliance and hence accurate transition between a variety of physiological conditions. The newly developed MCL provided an accurate mechanical representation of the human circulatory system for in vitro cardiovascular device testing and education purposes. The continued improvement of VAD test rigs is essential if VAD design is to improve, and hence improve quality of life and life expectancy for heart failure patients.
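The arterial side of such a loop is commonly lumped as a Windkessel element, in which compliance is exactly the parameter the variable compliance chamber manipulates. A minimal two-element Windkessel sketch with illustrative parameter values (the thesis model itself was built in MATLAB/SIMULINK with many more elements):

```python
import math

def windkessel_pressure(R=1.0, C=1.0, heart_rate=60, beats=10, dt=1e-3):
    """Two-element Windkessel: C * dP/dt = Q_in(t) - P/R.

    R    : lumped peripheral resistance (arbitrary units)
    C    : arterial compliance, the quantity varied by the chamber
    Q_in : half-sine ejection pulse during systole, zero in diastole
    """
    period = 60.0 / heart_rate          # seconds per beat
    systole = 0.3 * period              # ejection lasts ~30% of the beat
    p = 80.0                            # initial pressure (arbitrary units)
    trace = []
    t = 0.0
    while t < beats * period:
        phase = t % period
        q_in = 400.0 * math.sin(math.pi * phase / systole) if phase < systole else 0.0
        p += dt * (q_in - p / R) / C    # forward-Euler step
        trace.append(p)
        t += dt
    return trace

trace = windkessel_pressure()
# Higher compliance damps the pulse: pulse pressure shrinks as C grows.
stiff = windkessel_pressure(C=0.5)
compliant = windkessel_pressure(C=2.0)
```

Varying `C` between runs reproduces, in miniature, the transition between physiological conditions that the variable compliance chamber provides in hardware.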

Relevance:

100.00%

Publisher:

Abstract:

The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing the above-mentioned challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are respectively presented as "training target" and "training input" to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input.
Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model and four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model, but which neglect suspended histories; a time-series-prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecasted. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds the promise for increased asset availability, maintenance cost effectiveness, operational safety and, ultimately, organisation competitiveness.
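The adapted estimator builds on the standard Kaplan-Meier product-limit form, in which suspended (censored) units leave the risk set without counting as failures. A minimal sketch of the standard estimator on toy failure/suspension data (the thesis' adapted variant and the degradation-based density estimator are not reproduced here):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit survival estimator.

    times  : observed running time of each unit
    events : 1 = failure observed, 0 = suspended (unit withdrawn
             before failure, the common industrial case)
    """
    failures = Counter(t for t, e in zip(times, events) if e == 1)
    exits = Counter(times)          # every unit leaves the risk set once
    n_at_risk = len(times)
    s = 1.0
    curve = {}
    for t in sorted(set(times)):
        d = failures.get(t, 0)
        if d:
            s *= 1.0 - d / n_at_risk   # step down only at failures
        curve[t] = s
        n_at_risk -= exits[t]
    return curve

# Toy histories: the units at t=3 and t=7 were suspended, not failed.
times = [2, 3, 3, 5, 7, 8]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Survival probabilities of this form are what the proposed model presents to the neural network as training targets.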

Relevance:

100.00%

Publisher:

Abstract:

In children, joint hypermobility (typified by structural instability of joints) manifests clinically as neuro-muscular and musculo-skeletal conditions and conditions associated with development and organization of control of posture and gait (Finkelstein, 1916; Jahss, 1919; Sobel, 1926; Larsson, Mudholkar, Baum and Srivastava, 1995; Murray and Woo, 2001; Hakim and Grahame, 2003; Adib, Davies, Grahame, Woo and Murray, 2005). The process of control of the relative proportions of joint mobility and stability, whilst maintaining equilibrium in standing posture and gait, is dependent upon the complex interrelationship between skeletal, muscular and neurological function (Massion, 1998; Gurfinkel, Ivanenko, Levik and Babakova, 1995; Shumway-Cook and Woollacott, 1995). The efficiency of this relies upon the integrity of neuro-muscular and musculo-skeletal components (ligaments, muscles, nerves), and the Central Nervous System's capacity to interpret, process and integrate sensory information from visual, vestibular and proprioceptive sources (Crotts, Thompson, Nahom, Ryan and Newton, 1996; Riemann, Guskiewicz and Shields, 1999; Schmitz and Arnold, 1998) and development and incorporation of this into a representational scheme (postural reference frame) of body orientation with respect to internal and external environments (Gurfinkel et al., 1995; Roll and Roll, 1988). Sensory information from the base of support (feet) makes a significant contribution to the development of reference frameworks (Kavounoudias, Roll and Roll, 1998). Problems with the structure and/or function of any one, or combination, of these components or systems may result in partial loss of equilibrium and, therefore, ineffectiveness or significant reduction in the capacity to interact with the environment, which may result in disability and/or injury (Crotts et al., 1996; Rozzi, Lephart, Sterner and Kuligowski, 1999b).
Whilst literature focusing upon clinical associations between joint hypermobility and conditions requiring therapeutic intervention has been abundant (Crego and Ford, 1952; Powell and Cantab, 1983; Dockery, in Jay, 1999; Grahame, 1971; Childs, 1986; Barton, Bird, Lindsay, Newton and Wright, 1995a; Rozzi, et al., 1999b; Kerr, Macmillan, Uttley and Luqmani, 2000; Grahame, 2001), there has been a deficit in controlled studies in which the neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility have been quantified and considered within the context of organization of postural control in standing balance and gait. This was the aim of this project, undertaken as three studies. The major study (Study One) compared the fundamental neuro-muscular and musculo-skeletal characteristics of 15 children with joint hypermobility, and 15 age (8 and 9 years), gender, height and weight matched non-hypermobile controls. Significant differences were identified between previously undiagnosed hypermobile (n=15) and non-hypermobile children (n=15) in passive joint ranges of motion of the lower limbs and lumbar spine, muscle tone of the lower leg and foot, barefoot CoP displacement and in parameters of barefoot gait. Clinically relevant differences were also noted in barefoot single leg balance time. There were no differences between groups in isometric muscle strength in ankle dorsiflexion, knee flexion or extension. The second comparative study investigated foot morphology in non-weight bearing and weight bearing load conditions of the same children with and without joint hypermobility using three dimensional images (plaster casts) of their feet. The preliminary phase of this study evaluated the casting technique against direct measures of foot length, forefoot width, RCSP and forefoot to rearfoot angle. Results indicated accurate representation of elementary foot morphology within the plaster images. 
The comparative study examined the between and within group differences in measures of foot length and width, and in measures above the support surface (heel inclination angle, forefoot to rearfoot angle, normalized arch height, height of the widest point of the heel) in the two load conditions. Results of measures from plaster images identified that hypermobile children have different barefoot weight bearing foot morphology above the support surface than non-hypermobile children, despite no differences in measures of foot length or width. Based upon the differences in components of control of posture and gait in the hypermobile group, identified in Study One and Study Two, the final study (Study Three), using the same subjects, tested the immediate effect of specifically designed custom-made foot orthoses upon balance and gait of hypermobile children. The design of the orthoses was evaluated against the direct measures and the measures from plaster images of the feet. This ascertained the differences in morphology of the modified casts used to mould the orthoses and the original image of the foot. The orthoses were fitted into standardized running shoes. The effect of the shoe alone was tested upon the non-hypermobile children as the non-therapeutic equivalent condition. Immediate improvement in balance was noted in single leg stance and CoP displacement in the hypermobile group together with significant immediate improvement in the percentage of gait phases and in the percentage of the gait cycle at which maximum plantar flexion of the ankle occurred in gait. The neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility are different from those of non-hypermobile children. The Beighton, Solomon and Soskolne (1973) screening criteria successfully classified joint hypermobility in children. 
As a result of this study, joint hypermobility has been identified as a variable which must be controlled in studies of foot morphology and function in children. The outcomes of this study provide a basis upon which to further explore the association between joint hypermobility and neuro-muscular and musculo-skeletal conditions, and have relevance for the physical education of children with joint hypermobility, for footwear and orthotic design processes and, in particular, for clinical identification and treatment of children with joint hypermobility.

Relevance:

100.00%

Publisher:

Abstract:

Time Alone is the introductory image to the exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a photographic image captured with a five-minute exposure inside an abandoned building which the author converted into a camera obscura. It depicts an inverted image of the outside environment and the text 'time', drawn by torch-light within the building interior during the photographic exposure. The image evokes isolation and the temporality of inhabitation within the remote farmlands of the Great Southern Region of Western Australia: the region of focus for all twelve works in Lightsite. Indeed, the owner of this now-abandoned house passed away and was not found for a week, bringing poignancy to the central theme of this creative work.

Relevance:

100.00%

Publisher:

Abstract:

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; instead, it is simpler to learn position as a function of the visual input. Usually when learning images, an intermediate representation is employed. An appropriate starting point for biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
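A common complex cell model of the kind evaluated here is the energy model: the squared outputs of a quadrature (cosine/sine) Gabor pair are summed, giving a response that is invariant to the phase of the stimulus, i.e. to small local shifts, which is the property that makes it attractive for localisation. A sketch with a hypothetical 1-D filter (the paper's actual two models are not specified in the abstract):

```python
import numpy as np

def complex_cell_response(stimulus, freq=0.2):
    """Energy-model complex cell over a 1-D luminance profile:
    sum of squared responses of a quadrature Gabor pair."""
    n = len(stimulus)
    x = np.arange(n) - n // 2
    envelope = np.exp(-x**2 / (2.0 * (n / 6.0) ** 2))
    even = envelope * np.cos(2 * np.pi * freq * x)   # cosine-phase Gabor
    odd = envelope * np.sin(2 * np.pi * freq * x)    # sine-phase Gabor
    return float((stimulus @ even) ** 2 + (stimulus @ odd) ** 2)

# Two gratings at the cell's preferred frequency but different phases
# (i.e. shifted copies) elicit nearly identical responses.
n = 101
x = np.arange(n) - n // 2
r_a = complex_cell_response(np.cos(2 * np.pi * 0.2 * x))
r_b = complex_cell_response(np.cos(2 * np.pi * 0.2 * x + 1.0))
```

This shift tolerance is what lets a downstream network associate the representation with a place rather than with an exact camera pose.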

Relevance:

100.00%

Publisher:

Abstract:

According to statistics and trend data, women continue to be substantially under-represented in the Australian professoriate, and growth in their representation has been slow despite the plethora of equity programs. While not disputing these facts, we propose that examining gender equity by cohort provides a complementary perspective on the status of gender equity in the professoriate. Based on over 500 survey responses, we detected substantial similarities between women and men who were appointed as professors or associate professors between 2005 and 2008. There were similar proportions of women and men appointed via external or internal processes or by invitation. Additionally, similar proportions of women and men professors expressed a marked preference for research over teaching. Furthermore, there were similar distributions between the genders in the age of appointment to the professoriate. However, a notable gender difference was that women were appointed to the professoriate on average 1.9 years later than men. This later appointment provides one reason for the lower representation of women compared to men in the professoriate. It also raises questions about the typical length of time that women and men remain in the (paid) professoriate and the reasons why they might leave it. A further similarity between women and men in this cohort was their identification of motivation and circumstances as key factors in their career orientation. However, substantially more women identified motivation than circumstances, and the situation was reversed for men. The open-ended survey responses also provided confirmation that affirmative action initiatives make a difference to women's careers.

Relevance:

100.00%

Publisher:

Abstract:

International assessments of student science achievement, and growing evidence of students' waning interest in school science, have ensured that the development of scientific literacy continues to remain an important educational priority. Furthermore, researchers have called for teaching and learning strategies to engage students in the learning of science, particularly in the middle years of schooling. This study extends previous national and international research that has established a link between writing and learning science. Specifically, it investigates the learning experiences of eight intact Year 9 science classes as they engage in the writing of short stories that merge scientific and narrative genres (i.e., hybridised scientific narratives) about the socioscientific issue of biosecurity. This study employed a triangulation mixed methods research design, generating both quantitative and qualitative data, in order to investigate three research questions that examined the extent to which the students' participation in the study enhanced their scientific literacy; the extent to which the students demonstrated conceptual understanding of related scientific concepts through their written artefacts and in interviews about the artefacts; and the extent to which the students' participation in the project influenced their attitudes toward science and science learning. Three aspects of scientific literacy were investigated in this study: conceptual science understandings (a derived sense of scientific literacy), the students' transformation of scientific information in written stories about biosecurity (simple and expanded fundamental senses of scientific literacy), and attitudes toward science and science learning. 
The stories written by students in a selected case study class (N=26) were analysed quantitatively using a series of specifically-designed matrices that produce numerical scores that reflect students' developing fundamental and derived senses of scientific literacy. All students (N=152) also completed a Likert-style instrument (i.e., BioQuiz), pretest and posttest, that examined their interest in learning science, science self-efficacy, their perceived personal and general value of science, their familiarity with biosecurity issues, and their attitudes toward biosecurity. Socioscientific issues (SSI) education served as a theoretical framework for this study. It sought to investigate an alternative discourse with which students can engage in the context of SSI education, and the role of positive attitudes in engaging students in the negotiation of socioscientific issues. Results of the study have revealed that writing BioStories enhanced selected aspects of the participants' attitudes toward science and science learning, and their awareness and conceptual understanding of issues relating to biosecurity. Furthermore, the students' written artefacts alone did not provide an accurate representation of the level of their conceptual science understandings. An examination of these artefacts in combination with interviews about the students' written work provided a more comprehensive assessment of their developing scientific literacy. These findings support extensive calls for the utilisation of diversified writing-to-learn strategies in the science classroom, and therefore make a significant contribution to the writing-to-learn science literature, particularly in relation to the use of hybridised scientific genres. 
At the same time, this study presents the argument that the writing of hybridised scientific narratives such as BioStories can be used to complement the types of written discourse with which students engage in the negotiation of socioscientific issues, namely, argumentation, as the development of positive attitudes toward science and science learning can encourage students' participation in the discourse of science. The implications of this study for curricular design and implementation, and for further research, are also discussed.

Relevance:

100.00%

Publisher:

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia related complications. Stress electrocardiography/exercise testing is predictive of 10 year risk of CVD events and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). 
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used to evaluate models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by considering the AUC or Kappa statistic, as well as by evaluating subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at MR 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of a single leaf. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09.
The models based on counts of outliers and counts of data points outside normal range (Dataset RF_E) and derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_ G) perform the least well with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28 while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are most comprehensible and clinically relevant. The predictive accuracy increase achieved by addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature reduced, anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The effect of the pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. 
In the absence of risk factor input, the use of time-series variables after outlier removal and time series variables based on physiological variable values’ being outside the accepted normal range is associated with some improvement in model performance.
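Two of the evaluation devices used above, the Kappa statistic and majority-class under-sampling, can be sketched in a few lines. The data and labels below are toy values chosen to show why plain misclassification rate misleads on unbalanced classes; they are not from the study:

```python
import random

def cohen_kappa(y_true, y_pred):
    """Chance-corrected agreement. With a 90/10 class split, a classifier
    that always predicts the majority class scores 90% accuracy yet has
    kappa 0, which is why kappa is preferred on unbalanced data."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_exp = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

def undersample_majority(X, y, majority_label, seed=0):
    """Randomly drop majority-class rows until the classes balance."""
    rng = random.Random(seed)
    minority = [(x, c) for x, c in zip(X, y) if c != majority_label]
    majority = [(x, c) for x, c in zip(X, y) if c == majority_label]
    kept = minority + rng.sample(majority, len(minority))
    return [x for x, _ in kept], [c for _, c in kept]

y_true = [0] * 90 + [1] * 10        # 0 = no CVD (majority), 1 = CVD
y_pred = [0] * 100                  # degenerate always-majority classifier
kappa = cohen_kappa(y_true, y_pred)   # 0.0 despite 90% accuracy
X_bal, y_bal = undersample_majority(list(range(100)), y_true, majority_label=0)
```

Evaluating on the balanced subset, or reporting kappa alongside MR, removes the artefact that class distribution introduces into accuracy figures.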

Resumo:

The typical daily decision-making process of individuals regarding use of the transport system involves mainly three types of decisions: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and taste variations among commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated by the amount of delay. Correlation among different alternatives is confirmed by analysing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for random taste heterogeneity as well as inter-alternative correlation improves model performance.
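The random taste heterogeneity described above is the defining feature of the mixed (random coefficient) logit model: choice probabilities have no closed form and are simulated by averaging standard logit probabilities over draws of the coefficients. The sketch below illustrates only that simulation step for a single decision-maker; the attribute matrix, coefficient means and spreads, and the independent-normal mixing distribution are illustrative assumptions, not the paper's estimated specification (which is a mixed *nested* logit).

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_logit_probs(X, beta_mean, beta_sd, n_draws=500):
    """Simulated mixed-logit choice probabilities (illustrative sketch).

    X: (n_alternatives, n_attributes) attributes for one commuter.
    Coefficients are drawn from independent normals to represent
    random taste heterogeneity; logit probabilities are averaged
    over the draws."""
    betas = rng.normal(beta_mean, beta_sd, size=(n_draws, len(beta_mean)))
    v = betas @ X.T                       # (n_draws, n_alternatives) utilities
    v -= v.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(v)
    p /= p.sum(axis=1, keepdims=True)     # logit probability per draw
    return p.mean(axis=0)                 # simulated choice probabilities
```

In estimation these simulated probabilities enter the log-likelihood, which is then maximised over the mean and spread parameters of the coefficient distribution.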

Resumo:

Fractional Fokker-Planck equations (FFPEs) have gained much interest recently for describing transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns. However, effective numerical methods and analytic techniques for the FFPE are still in their infancy. In this paper, we consider a class of time-space fractional Fokker-Planck equations with a nonlinear source term (TSFFPE-NST), which involve the Caputo time fractional derivative (CTFD) of order α ∈ (0, 1) and the symmetric Riesz space fractional derivative (RSFD) of order μ ∈ (1, 2). By approximating the CTFD with the L1 algorithm and the RSFD with the shifted Grünwald method, a computationally effective numerical method is obtained for the TSFFPE-NST. The stability and convergence of the proposed numerical method are investigated. Finally, numerical experiments are carried out to support the theoretical claims.
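Both discretisations named in the abstract reduce to fixed weight sequences applied to the solution history. A minimal sketch of those two weight sequences, under the standard definitions (the particular grid, source term, and boundary handling of the paper are not reproduced here):

```python
import numpy as np

def l1_weights(alpha, n):
    """L1-scheme weights b_j = (j+1)^(1-a) - j^(1-a) for the Caputo
    time fractional derivative of order a in (0, 1)."""
    j = np.arange(n)
    return (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)

def grunwald_weights(mu, n):
    """Shifted Grunwald weights g_k = (-1)^k * binom(mu, k) for the
    Riesz space fractional derivative of order mu in (1, 2),
    computed by the recurrence g_k = g_{k-1} * (k - 1 - mu) / k."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 - mu) / k
    return g
```

For μ = 1 the Grünwald weights collapse to the first-difference stencil (1, -1, 0, ...), and the L1 weights decrease monotonically, which is the property used in the stability analysis of such schemes.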

Resumo:

In this paper, we present the application of a non-linear dimensionality reduction technique for the learning and probabilistic classification of hyperspectral images. Hyperspectral image spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image. This should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime and data classification can be performed.
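The pipeline described above, Isomap embedding followed by an EM-fitted Gaussian mixture, can be sketched with scikit-learn; the random stand-in data, band count, neighbourhood size, and number of mixture components below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pixels = rng.normal(size=(200, 64))   # stand-in for 200 pixels x 64 spectral bands

# Isomap finds a low-dimensional manifold embedding of the spectra.
embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(pixels)

# A Gaussian mixture fitted by EM on the embedding provides the
# probabilistic class model; prediction assigns each pixel a component.
gmm = GaussianMixture(n_components=4, random_state=0).fit(embedding)
labels = gmm.predict(embedding)
```

In the setting of the abstract, the mixture would be fitted offline on training data and then applied to new hyperspectral imagery at runtime.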

Resumo:

Queer university student print media often represents capitalism in a framework which could be classified as Marxist. At the same time, however, queer student media extensively publishes ideas which could be classified as academic queer theory. This chapter analyses these representations in the 2003, 2004 and 2006 editions of the national queer student publication Querelle, and in a sample of queer student media from four Australian universities. The perspectives of Marxism and academic queer theory are often argued to be contradictory (see, for example, Hennessy 1994; Morton 1996b; Kirsch 2007), and thus the students' application of these theories in tandem could be considered problematic. McKee (2004) asks 'Who gets to be an intellectual?' and suggests that the intellectualising undertaken by mainstream and alternative cultural creators is just as valid as that undertaken by university academics. He also raises concerns that the concept of theory is kept separate from everyday culture (McKee 2002). This chapter argues that, in constructing and representing their politics in this manner, the queer student activists are creating their own version of queer theory. This analysis of queer student media contributes to research on queer communities and queer theory, demonstrating how one specific cultural subset theorises queerness and queer politics, thereby contributing to the genealogy of queer.

Resumo:

Background: The 2003 Bureau of Labor Statistics American Time Use Survey (ATUS) contains 438 distinct primary activity variables that can be analyzed with regard to how Americans spend their time. The Compendium of Physical Activities is used to code physical activities derived from various surveys, logs, diaries, etc., to facilitate comparison of coded intensity levels across studies.

Methods: This paper describes the methods, challenges, and rationale for linking Compendium estimates of physical activity intensity (METs, metabolic equivalents) with all activities reported in the 2003 ATUS.

Results: The assigned ATUS intensity levels are not intended for computing the energy costs of physical activity in individuals. Instead, they are intended to identify time spent in activities broadly classified by type and intensity. This function will complement public health surveillance systems and aid in policy and health-promotion activities. For example, one planned project arising from this process is the descriptive epidemiology of time spent in common physical activity intensity categories.

Conclusions: Metabolic coding of the ATUS by linking it with the Compendium of Physical Activities can make important contributions to our understanding of Americans' time spent in health-related physical activity.
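The linkage described above amounts to a lookup from ATUS activity codes to Compendium MET values, from which time in broad intensity categories can be tallied. The sketch below uses the conventional <3.0 / 3.0-5.9 / ≥6.0 MET cut-points for light, moderate and vigorous intensity; the specific activity codes, MET values, and episodes are illustrative placeholders, not entries from the actual linkage.

```python
# Hypothetical linkage table: ATUS primary activity codes -> Compendium
# MET values (codes and MET values here are illustrative only).
atus_to_met = {
    "130124": 7.0,   # e.g. a vigorous team sport
    "020101": 3.3,   # e.g. interior cleaning
    "120303": 1.0,   # e.g. television watching
}

def classify_intensity(met):
    """Broad intensity categories via conventional MET cut-points."""
    if met < 3.0:
        return "light"
    if met < 6.0:
        return "moderate"
    return "vigorous"

# Tally reported minutes per intensity category for one respondent.
episodes = [("130124", 60), ("120303", 120)]   # (activity code, minutes)
minutes_by_intensity = {}
for code, minutes in episodes:
    cat = classify_intensity(atus_to_met[code])
    minutes_by_intensity[cat] = minutes_by_intensity.get(cat, 0) + minutes
```

Aggregating such tallies across respondents is what would support the descriptive epidemiology of intensity-category time mentioned in the Results.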