887 results for Harp with instrumental ensemble


Relevance: 30.00%

Abstract:

Objective: Laryngeal and tongue function was assessed in 28 patients to evaluate the presence, nature, and resolution of superior laryngeal, recurrent laryngeal, and hypoglossal nerve damage resulting from standard open primary carotid endarterectomy (CEA). Methods: The laryngeal and tongue function of 28 patients who underwent CEA was examined prospectively with various physiologic (Aerophone II, laryngograph, tongue transducer), acoustic (Multi-Dimensional Voice Program), and perceptual speech assessments. Measures were obtained from all participants preoperatively, and at 2 weeks and 3 months postoperatively. Results: The perceptual speech assessment indicated that the vocal quality of roughness was significantly more apparent at the 2-week postoperative assessment than preoperatively. However, by the 3-month postoperative assessment these values had returned to near preoperative levels, with no significant difference detected between preoperative and 3-month postoperative levels or between 2-week and 3-month postoperative levels. Both the instrumental assessments of laryngeal function and the acoustic assessment of vocal quality failed to identify any significant difference on any measure across the three assessment periods. Similarly, no significant impairment in tongue strength, endurance, or rate of repetitive tongue movements was detected at instrumental assessment of tongue function. Conclusions: No permanent changes to vocal or tongue function occurred in this group of participants after primary CEA. The lack of any significant long-term laryngeal or tongue dysfunction in this group suggests that the standard open CEA procedure is not associated with high rates of superior laryngeal, recurrent laryngeal, or hypoglossal nerve dysfunction, as previously believed.

Relevance: 30.00%

Abstract:

This paper considers a class of qubit channels for which three states are always sufficient to achieve the Holevo capacity. For these channels, it is known that there are cases where two orthogonal states are sufficient, two nonorthogonal states are required, or three states are necessary. Here a systematic theory is given which provides criteria to distinguish cases where two states are sufficient, and to determine whether these two states should be orthogonal or nonorthogonal. In addition, we prove a theorem on the form of the optimal ensemble when three states are required, and present efficient methods of calculating the Holevo capacity.
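
For a fixed input ensemble {p_i, rho_i} and channel N, the quantity being maximised is the Holevo quantity chi = S(sum_i p_i N(rho_i)) - sum_i p_i S(N(rho_i)); the capacity is its maximum over ensembles. A minimal numerical sketch follows (assuming numpy; the dephasing channel and two-state ensemble are illustrative placeholders, not the paper's specific channel class):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return -np.sum(evals * np.log2(evals))

def holevo_quantity(probs, states, channel):
    """chi = S(sum_i p_i N(rho_i)) - sum_i p_i S(N(rho_i))."""
    outputs = [channel(rho) for rho in states]
    avg = sum(p * rho for p, rho in zip(probs, outputs))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, outputs))

# Example: qubit dephasing channel and a two-state ensemble.
def dephasing(rho, lam=0.5):
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return (1 - lam) * rho + lam * Z @ rho @ Z

ket0 = np.array([[1], [0]], dtype=complex)
ket_plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
states = [k @ k.conj().T for k in (ket0, ket_plus)]
print(holevo_quantity([0.5, 0.5], states, dephasing))  # ~0.311 bits
```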

Relevance: 30.00%

Abstract:

Primary objective: To investigate the articulatory function of a group of children with traumatic brain injury (TBI), using both perceptual and instrumental techniques. Research design: The performance of 24 children with TBI was assessed on a battery of perceptual (Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and speech sample analysis) and instrumental (lip and tongue pressure transduction systems) assessments and compared with that of 24 non-neurologically impaired children matched for age and sex. Main outcomes: Perceptual assessment identified consonant and vowel imprecision, increased length of phonemes, and an overall reduction in speech intelligibility, while instrumental assessment revealed significant impairment in lip and tongue function in the TBI group, with rate and pressure in repetitive lip and tongue tasks particularly impaired. Significant negative correlations were identified between the degree of deviance of perceptual articulatory features and decreased function on many non-speech measures of lip function, as well as maximum tongue pressure and fine force tongue control at 20% of maximum tongue pressure. Additionally, sub-clinical articulatory deficits were identified in the children with TBI who were non-dysarthric. Conclusion: The results of the instrumental assessment of lip and tongue function support the finding of substantial articulatory dysfunction in this group of children following TBI. Hence, remediation of articulatory function should be a therapeutic priority in these children.

Relevance: 30.00%

Abstract:

Background: Accessibility is often constructed in terms of physical accessibility. There has been little research into how the environment can accommodate the communicative limitations of people with aphasia. Communication accessibility for people with aphasia is conceptualised in this paper within the World Health Organisation's International Classification of Functioning, Disability and Health (ICF). The focus of accessibility is considered in terms of the relationship between the environment and the person with the disability. Aims: This paper synthesises the results of three studies that examine the effectiveness of aphasia-friendly written material. Main Contribution: The first study (Rose, Worrall, & McKenna, 2003) found that aphasia-friendly formatting of written health information improves comprehension by people with aphasia, but not everyone prefers aphasia-friendly formatting. Brennan, Worrall, and McKenna (in press) found that the aphasia-friendly strategy of augmenting text with pictures, particularly ClipArt and Internet images, may be distracting rather than helpful. Finally, Egan, Worrall, and Oxenham (2004) found that the use of an aphasia-friendly written training manual was instrumental in assisting people with aphasia to learn to use the Internet. Conclusion: Aphasia-friendly formatting appears to improve the accessibility of written material for people with aphasia. Caution is needed when considering the use of illustrations, particularly ClipArt and Internet images, when creating aphasia-friendly materials. A research, practice, and policy agenda for introducing aphasia-friendly formatting is proposed.

Relevance: 30.00%

Abstract:

Two-dimensional (2-D) strain (ε2-D) on the basis of speckle tracking is a new technique for strain measurement. This study sought to validate ε2-D and tissue velocity imaging (TVI)-based strain (εTVI) against tagged harmonic-phase (HARP) magnetic resonance imaging (MRI). Thirty patients (mean age 62 ± 11 years) with known or suspected ischemic heart disease were evaluated. Wall motion (wall motion score index 1.55 ± 0.46) was assessed by an expert observer. Three apical images were obtained for longitudinal strain (16 segments) and 3 short-axis images for radial and circumferential strain (18 segments). Radial εTVI was obtained in the posterior wall. HARP MRI was used to measure principal strain, expressed as maximal length change in each direction. Values for ε2-D, εTVI, and HARP MRI were comparable for all 3 strain directions and were reduced in dysfunctional segments. The mean difference and correlation between longitudinal ε2-D and HARP MRI (2.1 ± 5.5%, r = 0.51, p < 0.001) were similar to those between longitudinal εTVI and HARP MRI (1.1 ± 6.7%, r = 0.40, p < 0.001). The mean difference and correlation were more favorable between radial ε2-D and HARP MRI (0.4 ± 10.2%, r = 0.60, p < 0.001) than between radial εTVI and HARP MRI (3.4 ± 10.5%, r = 0.47, p < 0.001). For circumferential strain, the mean difference and correlation between ε2-D and HARP MRI were 0.7 ± 5.4% and r = 0.51 (p < 0.001), respectively. In conclusion, the modest correlations of echocardiographic and HARP MRI strain reflect the technical challenges of the 2 techniques. Nonetheless, ε2-D provides a reliable tool to quantify regional function, with radial measurements being more accurate and feasible than with TVI. Unlike εTVI, ε2-D provides circumferential measurements. (c) 2006 Elsevier Inc. All rights reserved.
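
As a rough illustration of the agreement statistics reported above (mean difference ± SD and Pearson r between paired segmental strain values from two modalities), here is a minimal sketch assuming numpy; the strain values are hypothetical placeholders, not data from the study:

```python
import numpy as np

def agreement(echo, mri):
    """Mean difference +/- SD (Bland-Altman style) and Pearson r
    between paired strain measurements from two modalities."""
    echo, mri = np.asarray(echo, float), np.asarray(mri, float)
    diff = echo - mri
    r = np.corrcoef(echo, mri)[0, 1]
    return diff.mean(), diff.std(ddof=1), r

# Hypothetical longitudinal strain values (%) for a few segments.
eps_2d = [-18.2, -15.1, -9.8, -21.0, -12.4]
eps_harp = [-19.5, -14.0, -11.2, -22.3, -13.0]
bias, sd, r = agreement(eps_2d, eps_harp)
print(f"mean difference {bias:+.1f} +/- {sd:.1f}%, r = {r:.2f}")
```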

Relevance: 30.00%

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system's time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequences used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. Extensive numerical experiments show that their performance is remarkably good, even for the implementation of long gate sequences used to simulate interesting physical models. The systems examined here are therefore promising building blocks of future scalable architectures and can be used for proof-of-principle experiments in quantum information processing and quantum simulation.
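
The simulation step described above (integrating the master equation for the density matrix, with decoherence included) is commonly done with Lindblad collapse operators. A minimal sketch using QuTiP's mesolve for a single driven qubit; the Hamiltonian and the T1/T2 values are generic placeholders, not the molecular-nanomagnet or spin-photon models of the proposals above:

```python
import numpy as np
from qutip import basis, mesolve, sigmam, sigmax, sigmaz

# Single qubit driven in the rotating frame, with relaxation (T1) and
# pure dephasing folded in as Lindblad collapse operators.
omega_r = 2 * np.pi * 0.1               # Rabi frequency (arbitrary units)
T1, T2 = 100.0, 50.0                    # placeholder relaxation/coherence times
gamma_phi = 1.0 / T2 - 1.0 / (2 * T1)   # pure-dephasing rate

H = 0.5 * omega_r * sigmax()
c_ops = [np.sqrt(1.0 / T1) * sigmam(),          # relaxation
         np.sqrt(gamma_phi / 2.0) * sigmaz()]   # pure dephasing (one common convention)

rho0 = basis(2, 0) * basis(2, 0).dag()          # start in |0><0|
tlist = np.linspace(0.0, 50.0, 500)
result = mesolve(H, rho0, tlist, c_ops=c_ops, e_ops=[sigmaz()])
print(result.expect[0][-1])                     # <sigma_z> at the final time
```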

Relevance: 30.00%

Abstract:

Interfaces are studied in an inhomogeneous critical state where boundary pinning is compensated with a ramped force. Sandpiles driven off the self-organized critical point provide an example of this ensemble in the Edwards-Wilkinson (EW) model of kinetic roughening. A crossover from quenched to thermal noise violates spatial and temporal translational invariances. The bulk temporal correlation functions have the effective exponents β1D∼0.88±0.03 and β2D∼0.52±0.05, while at the boundaries βb,1D/2D∼0.47±0.05. The bulk β1D is shown to be reproduced in a randomly kicked thermal EW model.
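
The thermal Edwards-Wilkinson equation referenced here, dh/dt = nu * d2h/dx2 + eta(x,t), is straightforward to integrate directly. A minimal 1D sketch (assuming numpy) that recovers the thermal growth exponent beta = 1/4, the baseline against which the quoted effective exponents (e.g., the bulk β1D ≈ 0.88 of the driven ensemble) stand out:

```python
import numpy as np

rng = np.random.default_rng(1)
L, nu, dt, steps = 256, 1.0, 0.05, 20000
h = np.zeros(L)
width = []

for _ in range(steps):
    lap = np.roll(h, 1) - 2 * h + np.roll(h, -1)   # periodic Laplacian
    noise = rng.normal(0.0, 1.0, L)                # thermal (annealed) noise
    h += dt * nu * lap + np.sqrt(dt) * noise       # Euler-Maruyama step
    width.append(np.sqrt(np.mean((h - h.mean()) ** 2)))

# Effective growth exponent beta from W(t) ~ t^beta (early-time fit,
# well before saturation at t ~ L^2 / nu).
t = dt * (1 + np.arange(steps))
lo, hi = 10, 2000
beta = np.polyfit(np.log(t[lo:hi]), np.log(np.array(width)[lo:hi]), 1)[0]
print(f"estimated beta ~ {beta:.2f}  (thermal 1D EW value: 0.25)")
```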

Relevance: 30.00%

Abstract:

DNA-binding proteins are crucial for various cellular processes, such as recognition of specific nucleotides, regulation of transcription, and regulation of gene expression. Developing an effective model for identifying DNA-binding proteins is an urgent research problem. Many methods have been proposed to date, but most of them focus on only one classifier and cannot make full use of the large number of negative samples to improve predicting performance. This study proposed a predictor called enDNA-Prot for DNA-binding protein identification by employing the ensemble learning technique. Experimental results showed that enDNA-Prot was comparable with DNA-Prot and outperformed DNAbinder and iDNA-Prot, with performance improvements in the range of 3.97-9.52% in ACC and 0.08-0.19 in MCC. Furthermore, when the benchmark dataset was expanded with negative samples, enDNA-Prot outperformed the three existing methods by 2.83-16.63% in terms of ACC and 0.02-0.16 in terms of MCC. This indicates that enDNA-Prot is an effective method for DNA-binding protein identification and that expanding the training dataset with negative samples can improve its performance. For the convenience of experimental scientists, we developed a user-friendly web server for enDNA-Prot, which is freely accessible to the public. © 2014 Ruifeng Xu et al.
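
The core idea, combining several base classifiers into an ensemble and scoring it by ACC and MCC, can be sketched with scikit-learn. This is a generic illustration on synthetic stand-in features; enDNA-Prot's actual feature encoding and base learners may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in features (e.g., composition-style vectors) and
# random labels purely for demonstration; real data would replace these.
rng = np.random.default_rng(0)
X = rng.random((200, 20))            # 200 proteins x 20 features
y = rng.integers(0, 2, 200)          # 1 = DNA-binding, 0 = non-binding

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")                   # average predicted probabilities

acc = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
mcc = cross_val_score(ensemble, X, y, cv=5,
                      scoring=make_scorer(matthews_corrcoef))
print(f"ACC = {acc.mean():.3f}, MCC = {mcc.mean():.3f}")
```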

Relevance: 30.00%

Abstract:

Motivation: Influenza A viral heterogeneity remains a significant threat due to unpredictable antigenic drift in seasonal influenza and antigenic shifts caused by the emergence of novel subtypes. Annual review of multivalent influenza vaccines targets strains of influenza A and B likely to be predominant in future influenza seasons, but this does not induce broad, cross-protective immunity against emergent subtypes. Better strategies are needed to prevent future pandemics. Cross-protection can be achieved by activating CD8+ and CD4+ T cells against highly conserved regions of the influenza genome. We combine available experimental data with informatics-based immunological predictions to help design vaccines potentially able to induce cross-protective T cells against multiple influenza subtypes. Results: To exemplify our approach we designed two epitope ensemble vaccines comprising highly conserved and experimentally verified immunogenic influenza A epitopes as putative non-seasonal influenza vaccines; one specifically targets the US population and the other is a universal vaccine. The USA-specific vaccine comprised 6 CD8+ T cell epitopes (GILGFVFTL, FMYSDFHFI, GMDPRMCSL, SVKEKDMTK, FYIQMCTEL, DTVNRTHQY) and 3 CD4+ epitopes (KGILGFVFTLTVPSE, EYIMKGVYINTALLN, ILGFVFTLTVPSERG). The universal vaccine comprised 8 CD8+ epitopes (FMYSDFHFI, GILGFVFTL, ILRGSVAHK, FYIQMCTEL, ILKGKFQTA, YYLEKANKI, VSDGGPNLY, YSHGTGTGY) and the same 3 CD4+ epitopes. Our USA-specific vaccine has a population protection coverage (PPC; the portion of the population potentially responsive to one or more component epitopes of the vaccine) of over 96% and 95% coverage of observed influenza subtypes. The universal vaccine has a PPC value of over 97% and 88% coverage of observed subtypes.
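
The PPC figure can be illustrated with a toy calculation: a person counts as covered if they carry at least one HLA allele that presents at least one vaccine epitope. The allele frequencies and epitope-allele restrictions below are hypothetical placeholders (real estimates use population-specific HLA genotype data), and the Hardy-Weinberg independence assumption is a deliberate simplification:

```python
# Illustrative PPC computation with hypothetical inputs.
allele_freq = {"A*02:01": 0.27, "A*01:01": 0.16, "B*07:02": 0.12}

# For each epitope, the set of alleles assumed to present it.
epitope_restriction = {
    "GILGFVFTL": {"A*02:01"},
    "FMYSDFHFI": {"A*02:01", "A*01:01"},
    "VSDGGPNLY": {"B*07:02"},
}

# Under Hardy-Weinberg, the chance of *not* carrying an allele with
# frequency f is (1 - f)^2; treat loci as independent for simplicity.
covering = set().union(*epitope_restriction.values())
p_no_response = 1.0
for a in covering:
    p_no_response *= (1.0 - allele_freq[a]) ** 2
ppc = 1.0 - p_no_response
print(f"PPC ~ {ppc:.1%}")   # fraction responsive to >= 1 epitope
```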

Relevance: 30.00%

Abstract:

With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue of computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe attacks, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks show up with great frequency over a short period of time when they attack systems; they differ from normal traffic data and can be easily separated from normal activities. On the contrary, U2R and R2L attacks are embedded in the data portions of the packets and normally involve only a single connection, making it difficult to achieve satisfactory detection accuracy for these two attack types. Therefore, we focus on the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to increase the speed of detection. Features with poor ability to predict the signatures of attacks, and features inter-correlated with one or more other features, are considered redundant; such features are removed, and only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to solve the ambiguity problem. The latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces the detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
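
The first-phase filter described above (keep features that predict the class label, drop features strongly inter-correlated with an already-kept feature) can be sketched as a greedy pass over Pearson correlations. A minimal illustration on synthetic data; the thresholds and features are hypothetical, not those of the dissertation:

```python
import numpy as np

def correlation_filter(X, y, target_thresh=0.1, inter_thresh=0.9):
    """Greedy correlation-based feature selection: drop features that
    correlate weakly with the label or strongly with a kept feature."""
    n_features = X.shape[1]
    # |Pearson r| of each feature with the label as a relevance proxy.
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    kept = []
    for j in np.argsort(-relevance):       # most relevant first
        if relevance[j] < target_thresh:
            break                          # remaining features are weaker
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < inter_thresh
               for k in kept):
            kept.append(j)
    return sorted(int(j) for j in kept)

# Hypothetical traffic features: column 2 nearly duplicates column 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=500)    # redundant copy
y = (X[:, 0] + X[:, 1] > 0).astype(float)
print(correlation_filter(X, y))   # one of the duplicated columns is dropped
```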

Relevance: 30.00%

Abstract:

In certain European countries and the United States of America, canines have been successfully used in human scent identification. There is, however, limited scientific knowledge on the composition of human scent and the detection mechanism that produces an alert from canines. This lack of information has resulted in successful legal challenges to human scent evidence in the courts of law. The main objective of this research was to utilize science to validate the current practices of using human scent evidence in criminal cases. The goals of this study were to utilize Headspace Solid Phase Micro-Extraction Gas Chromatography Mass Spectrometry (HS-SPME-GC/MS) to determine the optimum collection and storage conditions for human scent samples, to investigate whether the amount of DNA deposited upon contact with an object affects the alerts produced by human scent identification canines, and to create a prototype pseudo human scent which could be used for training purposes. Hand odor samples which were collected on different sorbent materials and exposed to various environmental conditions showed that human scent samples should be stored without prolonged exposure to UVA/UVB light to allow minimal changes to the overall scent profile. Various methods of collecting human scent from objects were also investigated, and it was determined that passive collection methods yield ten times more VOCs by mass than active collection methods. Through the use of the polymerase chain reaction (PCR), no correlation was found between the amount of DNA deposited upon contact with an object and the alerts produced by human scent identification canines. Preliminary studies conducted to create a prototype pseudo human scent showed that it is possible to produce fractions of a human scent sample which can be presented to the canines to determine whether specific fractions, or the entire sample, are needed to produce alerts by human scent identification canines.

Relevance: 30.00%

Abstract:

Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as a bush or natural forest fire, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. The ensemble framework for data cleaning, together with enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of the sensors, led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast number of real-time mobile streams generated today.
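
For reference, the F-measure used above to score fire-event detection combines precision and recall; a minimal sketch with hypothetical confusion counts:

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F_beta combines precision and recall; beta=1 gives the F1 score."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical confusion counts for fire-event alerts in a sensor stream:
# 42 true alarms detected, 7 false alarms raised, 11 real events missed.
print(f"F1 = {f_measure(42, 7, 11):.3f}")
```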

Relevance: 30.00%

Abstract:

Into the Bends of Time is a 40-minute work in seven movements for a large chamber orchestra with electronics, utilizing real-time computer-assisted processing of music performed by live musicians. The piece explores various combinations of interactive relationships between players and electronics, ranging from relatively basic processing effects to musical gestures achieved through stages of computer analysis, in which resulting sounds are crafted according to parameters of the incoming musical material. Additionally, some elements of interaction are multi-dimensional, in that they rely on the participation of two or more performers fulfilling distinct roles in the interactive process with the computer in order to generate musical material. Through processes of controlled randomness, several electronic effects induce elements of chance into their realization so that no two performances of this work are exactly alike. The piece gets its name from the notion that real-time computer-assisted processing, in which sound pressure waves are transduced into electrical energy, converted to digital data, artfully modified, converted back into electrical energy and transduced into sound waves, represents a “bending” of time.

The Bill Evans Trio featuring bassist Scott LaFaro and drummer Paul Motian is widely regarded as one of the most important and influential piano trios in the history of jazz, lauded for its unparalleled level of group interaction. Most analyses of Bill Evans’ recordings, however, focus on his playing alone and fail to take group interaction into account. This paper examines one performance in particular, of Victor Young’s “My Foolish Heart” as recorded in a live performance by the Bill Evans Trio in 1961. In Part One, I discuss Steve Larson’s theory of musical forces (expanded by Robert S. Hatten) and its applicability to jazz performance. I examine other recordings of ballads by this same trio in order to draw observations about normative ballad performance practice. I discuss meter and phrase structure and show how the relationship between the two is fixed in a formal structure of repeated choruses. I then develop a model of perpetual motion based on the musical forces inherent in this structure. In Part Two, I offer a full transcription and close analysis of “My Foolish Heart,” showing how elements of group interaction work with and against the musical forces inherent in the model of perpetual motion to achieve an unconventional, dynamic use of double-time. I explore the concept of a unified agential persona and discuss its role in imparting the song’s inherent rhetorical tension to the instrumental musical discourse.