950 results for M2 Segment


Relevance: 10.00%

Abstract:

In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate, by analysing the history of each group, including splitting and merging events. A simplified ground truth annotation strategy results in an approach with minimal setup requirements that is highly accurate.
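
As a concrete illustration of the counting step, here is a minimal Python sketch: the total crowd estimate is the sum of per-blob group sizes, with each track's history used to stabilise its estimate. The Track structure and the way size estimates enter the history are hypothetical placeholders, not the paper's implementation.

```python
# Hedged sketch of "total estimate = sum of group sizes over tracked blobs".
from dataclasses import dataclass, field

@dataclass
class Track:
    """One tracked foreground blob (a group of people)."""
    history: list = field(default_factory=list)  # per-frame size estimates

    def smoothed_size(self) -> float:
        # Averaging over the track history makes the estimate robust to
        # per-frame segmentation noise and to splitting/merging events.
        return sum(self.history) / max(len(self.history), 1)

def count_crowd(tracks: list) -> float:
    """Total crowd estimate for the current frame."""
    return sum(t.smoothed_size() for t in tracks)

# Example: three blobs estimated (from local features) to hold 2, 1 and 4 people.
tracks = [Track([2.1, 1.9, 2.0]), Track([1.0]), Track([3.8, 4.2])]
print(round(count_crowd(tracks)))  # -> 7
```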

Relevance: 10.00%

Abstract:

Islanded operation, protection, reclosing and arc extinguishing are some of the challenging issues related to the connection of converter-interfaced distributed generators (DGs) to a distribution network. Isolating upstream faults in grid-connected mode and detecting faults in islanded mode with overcurrent devices are both difficult. In the event of an arc fault, all DGs must be disconnected in order to extinguish the arc; otherwise, they will continue to feed the fault, thus sustaining it. However, system reliability can be increased by maximising DG connectivity, so the protection scheme must ensure that only the faulted segment is removed from the feeder. This holds even for a radial feeder, as DGs can be connected at various points along it. In this paper, a new relay scheme is proposed which, together with a novel current control strategy for converter-interfaced DGs, can isolate permanent and temporary arc faults. The proposed protection and control scheme can also coordinate with reclosers. The results are validated through PSCAD/EMTDC simulation and MATLAB calculations.

Relevance: 10.00%

Abstract:

This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. The essential processes considered in this research encompass efficient characterization of the evolving configuration of the vocal tract so as to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points along the time trajectories of spectral parameters, using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for the decomposition of feature parameters which significantly reduces the complexity of the interpolation process. Speech excitation in this work is characterized using a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and also computes pitch and voicing with high accuracy. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour to about 1/10 of its original size with negligible error. To approximate the gain contour, a set of uniformly distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivity of the parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders operating at rates above 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
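
For readers unfamiliar with TD, the underlying approximation (in its standard Atal-style form; the thesis's new basis function is not reproduced here) represents the trajectory of the spectral parameter vector as a weighted sum of temporally localised event functions:

```latex
\hat{\mathbf{y}}(n) \;=\; \sum_{k=1}^{K} \mathbf{a}_k\, \phi_k(n), \qquad 1 \le n \le N,
```

where \(\mathbf{y}(n)\) is the spectral feature vector at frame \(n\), \(\mathbf{a}_k\) are the event targets, and \(\phi_k(n)\) are event functions centred on the event centroids. Only the \(K \ll N\) events need to be quantized, which is why the compression ratio is independent of the frame update rate.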

Relevance: 10.00%

Abstract:

Objective: To determine whether bifocal and prismatic bifocal spectacles could control myopia in children with high rates of myopic progression. ---------- Methods: This was a randomized controlled clinical trial. One hundred thirty-five (73 girls and 62 boys) myopic Chinese Canadian children (myopia of at least 1.00 diopter [D]) with myopic progression of at least 0.50 D in the preceding year were randomly assigned to 1 of 3 treatments: (1) single-vision lenses (n = 41), (2) +1.50-D executive bifocals (n = 48), or (3) +1.50-D executive bifocals with a 3-prism-diopter base-in prism in the near segment of each lens (n = 46). ---------- Main Outcome Measures: Myopic progression measured by an automated refractor under cycloplegia and increase in axial length (secondary) measured by ultrasonography at 6-month intervals for 24 months. Only data from the right eye were used. ---------- Results: Of the 135 children (mean age, 10.29 years [SE, 0.15 years]; mean myopia, −3.08 D [SE, 0.10 D]), 131 (97%) completed the trial after 24 months. Myopic progression averaged −1.55 D (SE, 0.12 D) for those who wore single-vision lenses, −0.96 D (SE, 0.09 D) for those who wore bifocals, and −0.70 D (SE, 0.10 D) for those who wore prismatic bifocals. Axial length increased by an average of 0.62 mm (SE, 0.04 mm), 0.41 mm (SE, 0.04 mm), and 0.41 mm (SE, 0.05 mm), respectively. The treatment effect of bifocals (0.59 D) and prismatic bifocals (0.85 D) was significant (P < .001), and both bifocal groups had less axial elongation (0.21 mm) than the single-vision lens group (P < .001). ---------- Conclusions: Bifocal lenses can moderately slow myopic progression in children with high rates of progression over 24 months.
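
The reported treatment effects follow directly from the group means quoted above:

```latex
\underbrace{(-0.96) - (-1.55)}_{\text{bifocals}} = 0.59\ \mathrm{D}, \qquad
\underbrace{(-0.70) - (-1.55)}_{\text{prismatic bifocals}} = 0.85\ \mathrm{D}, \qquad
\underbrace{0.62 - 0.41}_{\text{axial length}} = 0.21\ \mathrm{mm}.
```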

Relevance: 10.00%

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and, ultimately, production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when the sentence structures and word inflections of different languages are contrasted (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating 'this' or 'that'. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as 'plate' or 'table', but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as 'dog' or 'ball'. This view leaves open the question of how blind children learn words for visible objects, and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').

Relevance: 10.00%

Abstract:

Safety interventions (e.g., median barriers, photo enforcement) and road features (e.g., median type and width) can influence crash severity, crash frequency, or both. Both dimensions, crash frequency and crash severity, are needed to obtain a full accounting of road safety. Extensive literature and common sense both dictate that crashes are not created equal, with fatalities costing society on average more than 1,000 times the cost of property-damage-only crashes. Despite this glaring disparity, the profession has not unanimously embraced or successfully defended a nonarbitrary severity-weighting approach for analyzing safety data and conducting safety analyses. It is argued here that the two dimensions can be jointly accommodated by intelligently and reliably weighting crash frequencies, converting all crashes to property-damage-only crash equivalents (PDOEs) using comprehensive societal unit crash costs. This approach is analogous to calculating axle-load equivalents in the prediction of pavement damage: for instance, a 40,000-lb truck causes 4,025 times more stress than a 4,000-lb car, so simply counting axles is not sufficient. Calculating PDOEs using unit crash costs is the most defensible and nonarbitrary weighting scheme, allows severity and frequency to be incorporated simply, and leads to crash models that are sensitive to factors affecting crash severity. Moreover, using PDOEs diminishes the errors introduced by the underreporting of less severe crashes, an added benefit of the PDOE analysis approach. The method is illustrated with rural road segment data from South Korea (which in practice would develop PDOEs with Korean crash cost data).
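
A minimal Python sketch of the PDOE conversion; the unit costs below are illustrative placeholders, not the comprehensive societal costs the paper would use:

```python
# Hedged sketch: weight each crash by its unit cost relative to a
# property-damage-only (PDO) crash, then sum. Costs are hypothetical.
UNIT_COST = {
    "fatal": 4_000_000,   # dollars per crash (illustrative)
    "injury": 100_000,
    "pdo": 4_000,
}

def pdoe(counts: dict) -> float:
    """Convert severity-stratified crash counts to PDO equivalents."""
    base = UNIT_COST["pdo"]
    return sum(n * UNIT_COST[sev] / base for sev, n in counts.items())

# With these costs a single fatal crash contributes 1,000 PDOEs,
# echoing the >1,000x cost disparity cited above.
print(pdoe({"fatal": 1, "injury": 5, "pdo": 20}))  # 1000 + 125 + 20 = 1145.0
```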

Relevance: 10.00%

Abstract:

Predicting safety on roadways is standard practice for road safety professionals, and a correspondingly extensive literature exists. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables: factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than the actual weather conditions under which crashes were observed have been used. Without the inclusion of weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g. intersections, road segments, highways). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, a set of planning-level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning-level safety. Accounting for regional differences, as in the experience with microscale safety models, has been problematic during the development of planning-level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted-variable bias resulting from excluding these important variables may adversely affect the coefficients of the included variables, contributing to difficulty in model interpretation and accuracy. This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, in models of accident frequency and of the frequency of fatal and/or injury crashes. The purpose of the study was to determine whether these variables do in fact improve the overall goodness of fit of the models, whether they explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and from Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and the inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways that depend on rain frequency and intensity.
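
The functional form of such frequency models is typically a negative binomial regression; a generic specification (assumed here for illustration, not quoted from the paper) in which rainfall enters as a covariate is:

```latex
E[y_i] \;=\; \exp\!\big(\beta_0 + \beta_r\,\mathrm{rain}_i + \boldsymbol{\beta}^{\top}\mathbf{x}_i\big),
\qquad \operatorname{Var}[y_i] \;=\; E[y_i] + \alpha\, E[y_i]^2,
```

where \(y_i\) is the crash count in zone \(i\), \(\mathrm{rain}_i\) is a rainfall measure, \(\mathbf{x}_i\) collects the socio-economic, demographic and roadway covariates, and \(\alpha\) is the overdispersion parameter; a significant \(\beta_r\) is what "rain-related variables were found to be statistically significant" amounts to.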

Relevance: 10.00%

Abstract:

Speeding is recognized as a major contributing factor in traffic crashes. In order to reduce speed-related crashes, the city of Scottsdale, Arizona implemented the first fixed-camera photo speed enforcement program (SEP) on a limited-access freeway in the US. The 9-month demonstration program, spanning January 2006 to October 2006, was implemented on a 6.5-mile urban freeway segment of Arizona State Route 101 running through Scottsdale. This paper presents the results of a comprehensive analysis of the impact of the SEP on speeding behavior, on crashes, and on the economic impact of crashes. The impact on speeding behavior was estimated using generalized least squares estimation, in which the observed speeds and speeding frequencies during the program period were compared to those during other periods. The impact of the SEP on crashes was estimated using three evaluation methods: a before-and-after (BA) analysis using a comparison group, a BA analysis with traffic-flow correction, and an empirical Bayes BA analysis with time-variant safety. The analysis results reveal that speeding detection frequencies (speeds of 76 mph or greater) increased by a factor of 10.5 after the SEP was (temporarily) terminated. Average speeds in the enforcement zone were reduced by about 9 mph when the SEP was implemented, after accounting for the influence of traffic flow. All crash types were reduced except rear-end crashes, although the estimated magnitude of impact varies across estimation methods (and their corresponding assumptions). When considering Arizona-specific crash-related injury costs, the SEP is estimated to yield about $17 million in annual safety benefits.
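
For reference, the empirical Bayes estimate used in such BA studies blends the safety-performance-function prediction with the site's observed count (standard form; the time-variant-safety refinement used in the paper is not shown):

```latex
\hat{\pi} \;=\; w\,\mu + (1 - w)\,x, \qquad w \;=\; \frac{1}{1 + \mu/\phi},
```

where \(\mu\) is the model-predicted crash frequency, \(x\) the observed crash count, and \(\phi\) the negative-binomial overdispersion parameter; the treatment effect is then the difference between \(\hat{\pi}\) projected to the after period and the crashes actually observed.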

Relevance: 10.00%

Abstract:

The detached housing scheme is a unique and exclusive segment of the residential property market in Malaysia. Generally, the product is expensive, and for the many Malaysians who can afford one, owning a detached house is a once-in-a-lifetime opportunity. In spite of this, most owners fail to fully comprehend the specific needs of this type of housing scheme, increasing the risk of the project becoming problematic. Unlike other types of pre-designed 'mass housing' schemes, a detached housing scheme may be built specifically to cater to the needs and demands of its owner. Therefore, maximum owner participation as the development progresses is vital to guarantee the success of the project. In addition, due to its unique design, the house has to individually comply with the requirements and regulations of the relevant authorities. Failure of the owner to recognise this will result in delays, fines and penalties, disputes, and ultimately cost overruns. These circumstances highlight the need for a model to guide the owner through the entire development process of a detached house. Therefore, this research aims to develop a model for successful detached housing development in Malaysia by maximising owner participation during the various development stages. To achieve this, questionnaire surveys and case studies will be employed to capture owners' experiences in developing their detached houses in Malaysia, and relevant statistical tools will be applied to analyse the responses. The results gained from this study will be synthesised into a model of successful detached housing development for the reference of future detached housing owners in Malaysia.

Relevance: 10.00%

Abstract:

A composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. The model takes into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilises multiple representative segments to capture the actual emission distribution of real vehicle flow. The model can therefore quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on vehicle fleet information, which not only helps to quantify the enhanced emissions at critical locations, but also helps to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance is of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, while no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure and the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used as the initial source definition in future dispersion models.
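
A minimal sketch of the composite line source idea: each segment of the zone carries a driving mode, a per-vehicle dwell time, and a mode-specific emission factor, and segment emissions are their product. All numbers below are illustrative placeholders, not the paper's measured factors.

```python
# Hedged sketch of a composite line source: emissions per segment are
# rate(mode) x dwell time x vehicle count. All values are hypothetical.
RATE = {  # particles per second per vehicle (illustrative)
    "cruise": 1e12,
    "decelerate": 5e11,
    "idle": 2e11,
    "accelerate": 5e12,
}

def segment_emissions(segments, n_vehicles):
    """Particle number emitted in each segment of the zone."""
    return [RATE[mode] * dwell_s * n_vehicles for mode, dwell_s in segments]

# Example: buses decelerate into the platform, idle, then accelerate out;
# the acceleration segment dominates, as observed at the bus station.
zone = [("decelerate", 8.0), ("idle", 20.0), ("accelerate", 12.0)]
print(segment_emissions(zone, n_vehicles=100))
```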

Relevance: 10.00%

Abstract:

Within surveillance video, occlusions are commonplace, and resolving them accurately is key to accurately tracking objects. The challenge of segmenting objects is further complicated by the fact that in many real-world surveillance environments the objects appear very similar; for example, footage of pedestrians in a city environment will contain many people wearing dark suits. In this paper, we propose a novel technique to segment groups and resolve occlusions using optical flow discontinuities. We demonstrate that the ratio of continuous to discontinuous pixels within a region can be used to locate the overlapping edges, and we incorporate this into an object tracking framework. Results on a portion of the ETISEO database show that the proposed algorithm improves tracking performance overall, as well as tracking within occlusions.
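
A minimal sketch of the continuity measure, assuming dense Farneback flow from OpenCV; the Sobel-based discontinuity test and its threshold are illustrative choices, not the paper's tuned method:

```python
# Hedged sketch: flag motion discontinuities (likely boundaries between
# overlapping objects) from the spatial gradient of the optical flow.
import cv2
import numpy as np

def continuity_ratio(prev_gray, curr_gray, region_mask, thresh=2.0):
    """Ratio of flow-continuous to flow-discontinuous pixels in a region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Gradient magnitude of both flow components; a large value marks a
    # motion discontinuity even where appearance (dark suits) is similar.
    du_dx = cv2.Sobel(flow[..., 0], cv2.CV_32F, 1, 0)
    du_dy = cv2.Sobel(flow[..., 0], cv2.CV_32F, 0, 1)
    dv_dx = cv2.Sobel(flow[..., 1], cv2.CV_32F, 1, 0)
    dv_dy = cv2.Sobel(flow[..., 1], cv2.CV_32F, 0, 1)
    grad = np.sqrt(du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2)

    region = grad[region_mask > 0]
    discontinuous = int(np.count_nonzero(region > thresh))
    continuous = region.size - discontinuous
    return continuous / max(discontinuous, 1)
```

A low ratio within a foreground region suggests it contains an occlusion boundary and should be split before tracking.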

Relevance: 10.00%

Abstract:

Myosin is believed to act as the molecular motor for many actin-based motility processes in eukaryotes. It is becoming apparent that a single species may possess multiple myosin isoforms, and at least seven distinct classes of myosin have been identified from studies of animals, fungi, and protozoans. The complexity of the myosin heavy-chain gene family in higher plants was investigated by isolating and characterizing myosin genomic and cDNA clones from Arabidopsis thaliana. Six myosin-like genes were identified from three polymerase chain reaction (PCR) products (PCR1, PCR11, PCR43) and three cDNA clones (ATM2, MYA2, MYA3). Sequence comparisons of the deduced head domains suggest that these myosins are members of two major classes. Analysis of the overall structure of the ATM2 and MYA2 myosins shows that they are similar to the previously identified ATM1 and MYA1 myosins, respectively. MYA3 appears to possess a novel tail domain, with five IQ repeats, a six-member imperfect repeat, and a segment of unique sequence. Northern blot analyses indicate that some of the Arabidopsis myosin genes are preferentially expressed in different plant organs. Combined with previous studies, these results show that the Arabidopsis genome contains at least eight myosin-like genes representing two distinct classes.

Relevance: 10.00%

Abstract:

Objective: Uterine Papillary Serous Carcinoma (UPSC) is uncommon and accounts for less than 5% of all uterine cancers; consequently, the majority of evidence about the benefits of adjuvant treatment comes from retrospective case series. We conducted a prospective multi-centre non-randomized phase 2 clinical trial using four cycles of adjuvant paclitaxel plus carboplatin chemotherapy followed by pelvic radiotherapy, in order to evaluate the tolerability and safety of this approach. ---------- Methods: The trial enrolled newly diagnosed, previously untreated patients with stage 1b-4 (FIGO-1988) UPSC with a papillary serous component of at least 30%. Paclitaxel (175 mg/m2) and carboplatin (AUC 6) were administered on day 1 of each 3-week cycle for 4 cycles. Chemotherapy was followed by external beam radiotherapy to the whole pelvis (50.4 Gy over 5.5 weeks). Completion and toxicity of treatment (Common Toxicity Criteria, CTC) and quality-of-life measures were the primary outcome indicators. ---------- Results: Twenty-nine of 31 patients completed treatment as planned. Dose reduction was needed in 9 patients (29%), treatment delay in 7 (23%), and treatment cessation in 2 patients (6.5%). Grade 3 or 4 hematologic toxicity occurred in 19% (6/31) of patients. Patients' self-reported quality of life remained stable throughout treatment. Thirteen of the 29 patients with stage 1-3 disease (44.8%) recurred (average follow-up 28.1 months, range 8-60 months). ---------- Conclusion: This multimodal treatment is feasible, safe, and reasonably well tolerated, and would be suitable for use in multi-institutional prospective randomized clinical trials incorporating novel therapies in patients with UPSC.
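
As context for the regimen, "AUC 6" carboplatin dosing is conventionally individualised with the Calvert formula (a standard clinical formula, stated here for the reader rather than taken from the trial report):

```latex
\text{dose (mg)} \;=\; \text{target AUC} \times (\mathrm{GFR} + 25) \;=\; 6 \times (\mathrm{GFR} + 25),
```

with GFR in ml/min and the target AUC in mg·ml\(^{-1}\)·min.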

Relevance: 10.00%

Abstract:

This paper presents a multiscale study using the coupled Meshless technique/Molecular Dynamics (M2) approach to explore the deformation mechanism of mono-crystalline metal (focusing on copper) under uniaxial tension. In M2, an advanced transition algorithm using transition particles is employed to ensure the compatibility of both displacements and their gradients, and an effective local quasi-continuum approach is applied to obtain the equivalent continuum strain energy density based on the atomistic potentials and the Cauchy-Born rule. The key parameters used in M2 are first investigated using a benchmark problem. M2 is then applied to the multiscale simulation of a mono-crystalline copper bar. It was found that mono-crystalline copper has very good elongation properties, and that its ultimate strength and Young's modulus are much higher than those obtained at the macro-scale.
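
The local quasi-continuum step rests on the Cauchy-Born rule: under a locally homogeneous deformation gradient \(\mathbf{F}\), lattice vectors deform affinely and the continuum strain energy density follows from the atomistic potential (generic form shown here; the specific potential used for copper is not reproduced):

```latex
\mathbf{r}_i \;=\; \mathbf{F}\,\mathbf{a}_i, \qquad
W(\mathbf{F}) \;=\; \frac{1}{\Omega_0} \sum_i V\big(\mathbf{r}_i(\mathbf{F})\big),
```

where \(\mathbf{a}_i\) are the undeformed lattice vectors, \(\Omega_0\) is the undeformed unit-cell volume, and \(V\) is the interatomic potential energy assigned to the cell.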

Relevance: 10.00%

Abstract:

In a previous chapter (Dean and Kavanagh, Chapter 37), the authors made a case for applying low-intensity (LI) cognitive behaviour therapy (CBT) to people with serious mental illness (SMI). As in other populations, LI CBT interventions typically deal with circumscribed problems or behaviours. LI CBT retains an emphasis on self-management, has restricted content and segment length, and does not necessarily require extensive CBT training. In applying these interventions to SMI, adjustments may be needed to address the cognitive and symptomatic difficulties often faced by these groups. What may take a single session in a less affected population may require several sessions, or a thematic application of the strategy within case management. In some cases, the LI CBT may begin to look more like a high-intensity (HI) intervention, albeit a simple one with many LI CBT characteristics still retained. So, if goal setting were introduced in one or two sessions, it could clearly be seen as an LI intervention; when applied to several different situations and across many sessions, it may be indistinguishable from a simple HI treatment, even if it retains the same format and is effectively applied by a practitioner with limited CBT training.

In some ways, LI CBT should be well suited to the case management of patients with SMI. Treating staff typically have heavy workloads and find it difficult to apply time-consuming treatments (Singh et al. 2003). LI CBT may allow support to be provided to greater numbers of service users, and allow staff to spend more time on those who need intensive and sustained support. However, the introduction of any change in practice has to address significant challenges, and LI CBT is no exception.

Many of the issues we face in applying LI CBT to routine case management in a mental health service, and their potential solutions, are essentially the same as in a range of other problem domains (Turner and Sanders 2006), and, indeed, are similar to those in any adoption of innovation (Rogers 2003). Over the last 20 years, several commentators have described barriers to implementing evidence-based innovations in mental health services (Corrigan et al. 1992; Deane et al. 2006; Kavanagh et al. 1993). The aim of the current chapter is to present a cognitive behavioural conceptualisation of problems and potential solutions for the dissemination of LI CBT.