59 results for Points and lines
Abstract:
Balancing the provision of a high quality of service against a tight budget is one of the biggest challenges for metro railway operators around the world. Conventionally, one possible approach for the operator to adjust the timetable is to alter the stop time at stations, provided other system constraints, such as traction equipment characteristics, are not taken into account. Yet this is not an effective, flexible or economical method, because the run-time of a train cannot be extended without limit, and a balance between run-time and energy consumption has to be maintained. Modification or installation of a new signalling system not only increases capital cost but also disrupts normal train service. Therefore, as a more effective, flexible and economical means of improving the quality of service, optimisation of train performance by coasting point identification has become more attractive and popular. However, identifying the appropriate starting points for coasting under the constraints of current service conditions is no simple task, because train movement is determined by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting points and investigates the possible improvement in computation time and fitness of the genes.
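The search itself can be pictured with a minimal GA sketch in Python. Everything here is an illustrative assumption rather than the paper's model: the chromosome is a single coasting point expressed as a fraction of the inter-station distance, and the toy simulate() function simply trades run-time against a convex energy cost.

```python
import random

RUNTIME_TARGET = 120.0   # s, assumed scheduled run-time for one section

def simulate(coast_frac):
    """Toy train-run model: coasting earlier saves energy but lengthens run-time."""
    run_time = RUNTIME_TARGET * (1.0 + 0.4 * (1.0 - coast_frac))  # assumed
    energy   = 100.0 * coast_frac ** 2                            # assumed, kWh
    return run_time, energy

def fitness(coast_frac, w_time=1.0, w_energy=0.5):
    run_time, energy = simulate(coast_frac)
    # Penalise deviation from the timetable and energy use (weights assumed).
    return -(w_time * abs(run_time - RUNTIME_TARGET) + w_energy * energy)

def ga(pop_size=30, generations=50, mut_rate=0.1):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = (p1 + p2) / 2.0              # arithmetic crossover
            if random.random() < mut_rate:       # Gaussian mutation, clamped
                child = min(1.0, max(0.0, child + random.gauss(0.0, 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print("best coasting point (fraction of section):", round(ga(), 3))
```

A real application would replace simulate() with a single-train movement simulator and use one gene per inter-station section, but the selection/crossover/mutation loop keeps the same shape.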
Abstract:
We present several new observations on the SMS4 block cipher and discuss their cryptographic significance. The crucial observation is the existence of fixed points, and of simple linear relationships between the bits of the input and output words, for each component of the round functions for some input words. This implies that the non-linear function T of SMS4 does not appear random and that the linear transformation provides poor diffusion. Furthermore, the branch number of the linear transformation in the key scheduling algorithm is shown to be less than optimal. The main security implication of these observations is that the round function is not always non-linear. Due to this linearity, it is possible to reduce the number of effective rounds of SMS4 by four. We also investigate the susceptibility of SMS4 to further cryptanalysis. Finally, we demonstrate a successful differential attack on a slightly modified variant of SMS4. These findings raise serious questions about the security provided by SMS4.
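To make the fixed-point observation concrete in generic terms: for a bijective 8-bit S-box S, a fixed point is any byte x with S(x) = x, and a random 256-element permutation is expected to have about one. The sketch below uses a random placeholder table rather than the actual SMS4 S-box, which is not reproduced here.

```python
import random

# Placeholder: a random 8-bit bijective S-box. Substitute the real SMS4
# S-box table here to reproduce the paper's observation.
sbox = list(range(256))
random.shuffle(sbox)

# A fixed point is any input byte the S-box maps to itself.
fixed_points = [x for x in range(256) if sbox[x] == x]
print("fixed points:", [hex(x) for x in fixed_points])

# For a random permutation the expected number of fixed points is 1,
# so a surplus of them is one simple indicator of non-random structure.
```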
Abstract:
While unlicensed driving does not play a direct causative role in road crashes, it represents a major problem for road safety. A particular subgroup of concern is those offenders who continue to drive after having their licence disqualified for drink driving. Surveys of disqualified drivers suggest that driving among this group is relatively common. Method: This paper reports findings from an analysis of the driving records of over 545,000 Queensland drivers who experienced a licence sanction between January 2003 and December 2008. The sample included drivers who were disqualified by a court (e.g., for drink driving), those whose licence had been suspended administratively (e.g., for accumulation of demerit points), and those who were placed on a restricted licence. Results: Overall, 95,461 of the drivers in the sample were disqualified from driving for a drink driving offence. During the period, these drivers were issued with a total of 2,644,619 traffic infringements, with approximately 12% (n = 8,095) convicted of a further drink driving offence while disqualified. Other traffic offences detected during this period included unlicensed driving (18%), driving an unregistered vehicle (27%), speeding (21%), dangerous driving (36%), mobile phone use (35%), non-restraint use (32%), and other moving violations (23%). Offending behaviour was more common among men than women. Conclusions: While licence disqualification has previously been shown to be a relatively effective sanction for managing the behaviour of drink driving offenders, the results of the current study highlight that it is far from a perfect tool, since many offenders continue to commit both drink driving and other traffic offences while disqualified. As such, this study highlights the ongoing need to enhance the detection of disqualified and unlicensed driving in order to deter this behaviour.
Abstract:
This paper presents a method for calculating the in-bucket payload volume on a dragline, for the purpose of estimating the material's bulk density in real time. Knowledge of the bulk density can provide instant feedback to mine planning and scheduling to improve blasting and, in turn, provide a more uniform bulk density across the excavation site. Furthermore, costs and emissions in dragline operation, maintenance and downstream material processing can be reduced. The main challenge is to determine an accurate position and orientation of the bucket under the constraint of real-time performance. The proposed solution uses a range, bearing and tilt sensor to locate and scan the bucket between the lift and dump stages of the dragline cycle. Various scanning strategies are investigated for their benefits in this real-time application. The bucket is segmented from the scene using cluster analysis, while the pose of the bucket is calculated using the iterative closest point (ICP) algorithm. Payload points are segmented from the bucket by a fixed-distance neighbour clustering method, which preserves boundary points and excludes low-density clusters introduced by overhead chains and the spreader bar. A height grid is then used to represent the payload, from which the volume can be calculated by summing over the grid cells. We show volume calculated on a scaled system with an accuracy of greater than 95 per cent.
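The final volume step lends itself to a short illustration: segmented payload points are binned into a 2D height grid and the volume is the sum of cell area times cell height. A minimal numpy sketch, with an illustrative point cloud and grid resolution (heights are taken relative to the grid's zero plane, an assumption not spelled out in the abstract):

```python
import numpy as np

def grid_volume(points, cell=0.05):
    """Estimate payload volume from segmented 3D points (x, y, z in metres).

    Each (x, y) grid cell keeps the maximum z of its points (the payload
    surface); volume = sum over occupied cells of cell_area * height.
    """
    pts = np.asarray(points, dtype=float)
    ix = np.floor(pts[:, 0] / cell).astype(int)
    iy = np.floor(pts[:, 1] / cell).astype(int)
    heights = {}
    for i, j, z in zip(ix, iy, pts[:, 2]):
        heights[(i, j)] = max(z, heights.get((i, j), z))
    return cell * cell * sum(heights.values())

# Toy check: a 1 m x 1 m slab of points 0.5 m high -> ~0.5 m^3.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(20000, 2))
pts = np.column_stack([xy, np.full(len(xy), 0.5)])
print(round(grid_volume(pts), 3), "m^3")
```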
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
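A minimal cvxpy sketch conveys the flavor of the approach described in both of these abstracts: learn a positive semidefinite combination of candidate kernel matrices over the combined train-and-test points by maximizing the alignment of the training block with the label matrix y y^T. This alignment objective is a simplification of the margin-based SDPs in the paper, and the two candidate kernels and toy data are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                          # 20 train + 10 test points
y = np.sign(X[:20, 0] + 0.1 * rng.normal(size=20))    # labels for the train part

# Candidate kernels on all (train + test) points: linear and Gaussian.
K_lin = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / 2.0)

# Learn the mixing weights: a linear objective over an SDP feasible set.
mu = cp.Variable(2)
K = mu[0] * K_lin + mu[1] * K_rbf
objective = cp.Maximize(cp.sum(cp.multiply(K[:20, :20], np.outer(y, y))))
constraints = [K >> 0,                # learned kernel must stay PSD
               cp.trace(K) == 30.0]   # trace normalisation bounds the problem
cp.Problem(objective, constraints).solve()

print("kernel weights:", mu.value)
```

Because K is defined over training and test points jointly, the labelled block shapes the embedding of the unlabelled block, which is the transductive element the abstracts describe.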
Abstract:
Anybody who has attempted to publish some aspect of their work in an academic journal will know that it isn't as easy as it may seem. The amount of preparation required for a manuscript can be quite daunting. Besides actually writing the manuscript, the authors are faced with a number of technical requirements. Each journal has its own formatting requirements, relating not only to section headings and text layout, but also to very small details such as the placement of commas in reference lists. Then, if data are presented in the form of figures, they must be formatted so that they can be understood by the readership, and most journals still require that the data be in a format which can be read when printed in black and white. Most daunting (and important) of all, for the article to be scientifically valid it must be absolutely true to the work reported (i.e. all data must be shown unless a strong justification exists for removing data points), and this might cause angst in the minds of the authors when the results aren't clear or possibly contradict the expected or desired result.
Abstract:
The most common software analysis tools available for measuring fluorescence images are designed for two-dimensional (2D) data; they rely on manual settings for the inclusion and exclusion of data points, and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements, for measuring the line distance between two objects, or for creating a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (a tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be used to analyze fluorescence data that are not continuous, because it ideally builds the cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.).
These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but little familiarity with computer applications, to perform quantification of morphological changes in cell dynamics.
Abstract:
Plug-in electric vehicles will soon be connected to residential distribution networks in large numbers and will add to already overburdened residential feeders. However, as battery technology improves, plug-in electric vehicles will also be able to support networks as small distributed generation units by transferring the energy stored in their batteries into the grid. Even though the increase in plug-in electric vehicle connections is gradual, their connection points and charging/discharging levels are random. Therefore, such single-phase bidirectional power flows can have an adverse effect on the voltage unbalance of a three-phase distribution network. In this article, a voltage unbalance sensitivity analysis based on the charging/discharging levels and connection points of plug-in electric vehicles in a residential low-voltage distribution network is presented. Because of the many uncertainties in plug-in electric vehicle ratings and connection points and in the network load, a Monte Carlo-based stochastic analysis is developed to predict voltage unbalance in the network in the presence of plug-in electric vehicles. A failure index is introduced to quantify the probability of non-standard voltage unbalance in the network due to plug-in electric vehicles.
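The stochastic analysis can be illustrated with a deliberately crude single-bus sketch: draw random connection phases and charging/discharging levels, compute the voltage unbalance factor |V2|/|V1| from the symmetrical-component (Fortescue) transform, and report the fraction of trials exceeding the commonly used 2% limit as a failure index. The feeder impedance, EV power range and fleet size below are illustrative assumptions, not the paper's network model.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                  # Fortescue operator (120 deg)
rng = np.random.default_rng(1)

def vuf(v_abc):
    """Voltage unbalance factor |V2|/|V1| from the three phase voltage phasors."""
    v_a, v_b, v_c = v_abc
    v1 = (v_a + a * v_b + a**2 * v_c) / 3   # positive-sequence component
    v2 = (v_a + a**2 * v_b + a * v_c) / 3   # negative-sequence component
    return abs(v2) / abs(v1)

def trial(n_ev=20, z=0.05 + 0.10j, v_nom=230.0):
    """One Monte Carlo trial: random connection phases and power levels."""
    v = v_nom * np.array([1, a**2, a])      # balanced source, ABC rotation
    p = np.zeros(3)
    for _ in range(n_ev):
        phase = rng.integers(3)             # random connection point
        p[phase] += rng.uniform(-7e3, 7e3)  # W drawn (+) or injected (-)
    i = np.conj(p / v)                      # crude per-phase current phasors
    return vuf(v - z * i)                   # voltage after feeder drop

results = np.array([trial() for _ in range(5000)])
print("failure index, P(VUF > 2%):", (results > 0.02).mean())
```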
Abstract:
Background: Nontuberculous mycobacteria (NTM) are normal inhabitants of a variety of environmental reservoirs, including natural and municipal water. The aim of this study was to document the variety of species of NTM in potable water in Brisbane, QLD, with a specific interest in the main pathogens responsible for disease in this region, and to explore factors associated with the isolation of NTM. One-litre water samples were collected from 189 routine collection sites in summer and 195 sites in winter. Samples were split, with half decontaminated with 0.005% CPC, then concentrated by filtration and cultured on 7H11 plates and in MGIT tubes (winter only). Results: Mycobacteria were grown from 40.21% of sites in summer (76/189) and 82.05% of sites in winter (160/195). The winter samples yielded the greatest number and variety of mycobacteria, as there was a high degree of subculture overgrowth and contamination in summer. Of those samples that did yield mycobacteria in summer, the variety of species differed from those isolated in winter. The inclusion of liquid media increased the yield for some species of NTM. Species that have been documented to cause disease in humans residing in Brisbane and that were also found in water include M. gordonae, M. kansasii, M. abscessus, M. chelonae, M. fortuitum complex, M. intracellulare, M. avium complex, M. flavescens, M. interjectum, M. lentiflavum, M. mucogenicum, M. simiae, M. szulgai and M. terrae. M. kansasii was frequently isolated, but M. avium and M. intracellulare (the main pathogens responsible for disease in QLD) were isolated infrequently. Distance of the sampling site from the treatment plant in summer was associated with isolation of NTM. Pathogenic NTM (defined as those known to cause disease in QLD) were more likely to be identified at sites with narrower-diameter pipes, predominantly distribution sample points, and at sites with asbestos cement or modified PVC pipes. Conclusions: NTM responsible for human disease can be found in large urban water distribution systems in Australia. Based on our findings, additional point chlorination, maintenance of more constant pressure gradients in the system, and the utilisation of particular pipe materials should be considered.
Abstract:
Aim: To examine whether fasting affects serum bilirubin levels in clinically healthy males and females. Methods: We utilised retrospective data from phase 1 clinical trials in which blood was collected in either a fed or a fasting state at screening and pre-dosing time points and analysed for total bilirubin levels as per standard clinical procedures. Participants were clinically healthy males (n = 105) or females (n = 30) aged 18 to 48 inclusive who participated in a phase 1 clinical trial in 2012 or 2013. Results: We found a statistically significant increase in total serum bilirubin levels in fasting males compared to non-fasting males. Fasting time correlated positively with increased bilirubin levels. The age of the healthy males did not correlate with their fasting bilirubin level. We found no correlation between fasting and bilirubin levels in clinically normal females. Conclusions: The recruitment and screening of volunteers for a clinical trial is a time-consuming and expensive process. This study clearly demonstrates that testing for serum bilirubin should be conducted on non-fasting male subjects. If fasting is required, then participants should not be excluded from a trial based on an elevated serum bilirubin level that is deemed non-clinically significant.
Abstract:
Software engineers constantly deal with problems of designing, analyzing, and improving process specifications, e.g., source code, service compositions, or process models. Process specifications are abstractions of behavior, observed or intended to be implemented in reality, which result from creative engineering practice. Usually, process specifications are formalized as directed graphs in which edges capture temporal relations between decisions, synchronization points, and work activities. Every process specification is a compromise between two poles: on the one hand, engineers strive to operate with fewer modeling constructs that conceal irrelevant details, while on the other hand, the details are required to achieve the desired level of customization for envisioned process scenarios. In our research, we approach the problem of varying the abstraction levels of process specifications. Formally, the developed abstraction mechanisms exploit the structure of a process specification and allow the generalization of low-level details into concepts at a higher abstraction level; the reverse procedure can be addressed as process specialization.
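The core structural operation can be sketched directly: collapse a set of low-level nodes of a directed process graph into one higher-level node, dropping internal edges and rewiring boundary edges. The edge-set representation and the toy process below are illustrative assumptions, not the paper's formalism.

```python
def abstract(edges, group, name):
    """Collapse all nodes in `group` into one higher-level node `name`.

    edges: set of (src, dst) pairs. Internal edges of the group are dropped
    (they are the low-level detail being concealed); boundary edges are
    rewired to the new node.
    """
    def rename(n):
        return name if n in group else n
    return {(rename(s), rename(d))
            for s, d in edges
            if not (s in group and d in group)}

# Toy process graph: receive -> check -> (approve | reject) -> archive
edges = {("receive", "check"), ("check", "approve"),
         ("check", "reject"), ("approve", "archive"),
         ("reject", "archive")}

# Generalize the decision details into one higher-level activity.
print(abstract(edges, {"check", "approve", "reject"}, "assess"))
# -> {('receive', 'assess'), ('assess', 'archive')}
```

Process specialization would run in the opposite direction, expanding "assess" back into its constituent decision structure.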
Abstract:
Background: Lumbar epidural steroid injections (ESIs) have previously been shown to provide some degree of pain relief in sciatica. The Number Needed to Treat (NNT) to achieve 50% pain relief has been estimated at 7 from the results of randomised controlled trials. Pain relief is temporary. ESIs remain one of the most commonly provided procedures in the UK. It is unknown whether this pain relief represents good value for money. Methods: 228 patients were randomised into a multi-centre double-blind randomised controlled trial. Subjects received up to 3 ESIs or intra-spinous saline injections, depending on response to and fall-off after the first injection. All other treatments were permitted. All received a review of analgesia, education and physical therapy. Quality of life was assessed using the SF-36 at 6 time points and compared using independent-sample t-tests. Follow-up was up to 1 year. Missing data were imputed using last observation carried forward (LOCF). QALYs (quality-adjusted life years) were derived from preference-based health values (summary health utility scores). The SF-6D health state classification was derived from SF-36 raw score data. Standard gambles (SG) were calculated using Model 10. SG scores were calculated on trial results; LOCF was not used for this. Instead, average SG scores were derived for a subset of patients with observations for all visits up to week 12. Incremental QALYs were derived as the difference in the area between the SG curve for the active group and that for the placebo group. Results: SF-36 domains showed a significant improvement in pain at week 3, but this was not sustained (mean 54 active vs 61 placebo, P < 0.05). Other domains did not show any significant gains compared with placebo. For the derivation of SG, the number in the sample in each period differed. By week 12, average SG scores for active and placebo converged; in other words, the health gain for the active group as measured by SG was achieved by the placebo group by week 12. The incremental QALY gained for a patient under the trial protocol compared with the standard care package was 0.0059350, equivalent to an additional 2.2 days of full health. The cost per QALY gained to the provider from a patient management strategy administering one epidural, as suggested by the results, was £25,745.68. This result was derived assuming that the QALY gain calculated for patients under the trial protocol would approximate that under a patient management strategy based on the trial results (one ESI). This is above the threshold suggested by some for a cost-effective treatment. Conclusions: The transient benefit in pain relief afforded by ESIs does not appear to be cost-effective. Further work is needed to develop more cost-effective conservative treatments for sciatica.
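The QALY arithmetic reported above can be reproduced step by step: the incremental QALY is the area between the active and placebo SG curves, and 0.0059350 QALYs times 365 gives roughly 2.2 days of full health. In the sketch below only the headline figures come from the abstract; the per-visit SG utility values are placeholders.

```python
import numpy as np

# Placeholder SG utility curves at visit weeks (illustrative values only;
# the trial's per-visit SG scores are not given in the abstract).
weeks   = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
active  = np.array([0.60, 0.68, 0.66, 0.64, 0.63])
placebo = np.array([0.60, 0.63, 0.63, 0.63, 0.63])

# Incremental QALYs = trapezoidal area between the two curves, in years.
diff = active - placebo
area_weeks = ((diff[1:] + diff[:-1]) / 2.0 * np.diff(weeks)).sum()
print(f"incremental QALYs (toy curves): {area_weeks / 52.0:.7f}")

# Headline figures from the abstract:
print("0.0059350 QALYs =", round(0.0059350 * 365, 1), "days of full health")
# Consistency check: cost/QALY times the QALY gain gives the implied
# incremental cost per patient.
print("implied incremental cost: £", round(25745.68 * 0.0059350, 2))
```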
Abstract:
Tissue engineering and cell implantation therapies are gaining popularity because of their potential to repair and regenerate tissues and organs. To investigate the role of inflammatory cytokines in new tissue development in engineered tissues, we characterized the nature and timing of the cell populations forming new adipose tissue in a mouse tissue engineering chamber (TEC) and characterized the gene and protein expression of cytokines in the newly developing tissues. EGFP-labeled bone marrow transplant mice and MacGreen mice were implanted with TECs for periods ranging from 0.5 days to 6 weeks. Tissues were collected at various time points and assessed for cytokine expression by ELISA and mRNA analysis, or labeled for specific cell populations in the TEC. Macrophage-derived factors, such as monocyte chemotactic protein-1 (MCP-1), appear to induce adipogenesis by recruiting macrophages and bone marrow-derived precursor cells to the TEC at early time points, followed by a second wave of non-bone-marrow-derived progenitors. Gene expression analysis suggests that TNFα, LCN-2, and interleukin-1β are important in the early stages of neo-adipogenesis. Increasing platelet-derived growth factor and vascular endothelial growth factor expression at early time points correlates with preadipocyte proliferation and the induction of angiogenesis. This study provides new information about key elements involved in the early development of new adipose tissue.
Abstract:
This (seat) attribute target list and Design for Comfort taxonomy report is based on the literature review report (C3-21, Milestone 1), which specified different areas (factors) with specific influence on automotive seat comfort. The attribute target list summarizes the seat factors established in the literature review (Figure 1) and subsumes detailed attributes derived from the literature findings within these factors/classes. The attribute target list (Milestone 2) then provides the basis for the "Design for Comfort" taxonomy (Milestone 3) and helps the project develop target settings (values) that will be measured during the testing phase of the C3-21 project. The attribute target list will become the core technical description of seat attributes, to be incorporated into the final comfort procedure that will be developed. The attribute target list and Design for Comfort taxonomy complete the target definition process. They specify the context, markets and application (vehicle classes) for seat development. As multiple markets are addressed, the target setting requires flexible variables to accommodate the selected customer range. These ranges will be filled with data progressively in forthcoming studies. The taxonomy covers how and where the targets are derived, reference points and standards, and engineering and subjective data from previous studies as well as literature findings. The comfort parameters are ranked to identify which targets, variables or metrics have the biggest influence on comfort. The comfort areas included are seat kinematics (adjustability), seat geometry and pressure distribution (static comfort), seat thermal behavior and noise/vibration transmissibility (cruise comfort), and finally material properties, design and features (seat harmony). Data from previous studies have been fine-tuned and will be validated in the nominated contexts and markets in follow-up dedicated studies.