Abstract:
This paper studies single-channel speech separation, assuming unknown, arbitrary temporal dynamics for the speech signals to be separated. A data-driven approach is described, which matches each mixed speech segment against a composite training segment to separate the underlying clean speech segments. To advance the separation accuracy, the new approach seeks and separates the longest mixed speech segments with matching composite training segments. Lengthening the mixed speech segments to match reduces the uncertainty of the constituent training segments, and hence the error of separation. For convenience, we call the new approach Composition of Longest Segments, or CLOSE. The CLOSE method includes a data-driven approach to model long-range temporal dynamics of speech signals, and a statistical approach to identify the longest mixed speech segments with matching composite training segments. Experiments are conducted on the Wall Street Journal database, for separating mixtures of two simultaneous large-vocabulary speech utterances spoken by two different speakers. The results are evaluated using various objective and subjective measures, including the challenge of large-vocabulary continuous speech recognition. It is shown that the new separation approach leads to significant improvement in all these measures.
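The core composition step can be sketched as follows: a mixed segment is compared against sums of candidate clean training segments from two speakers, and the best-matching pair is returned. This is a toy illustration under simplifying assumptions (mean-squared-error matching, fixed-length segments, hypothetical names); the paper's actual method matches variable-length segments statistically.

```python
import numpy as np

def best_composite(mixture, bank_a, bank_b):
    # Compare the mixed segment against every composite (sum) of one
    # training segment per speaker; return the best-matching pair.
    best, best_err = None, np.inf
    for i, a in enumerate(bank_a):
        for j, b in enumerate(bank_b):
            err = np.mean((mixture - (a + b)) ** 2)
            if err < best_err:
                best, best_err = (i, j), err
    return best, best_err

# Toy data: the mixture is exactly bank_a[1] + bank_b[0].
bank_a = [np.zeros(4), np.array([1.0, 2.0, 1.0, 0.0])]
bank_b = [np.array([0.5, 0.0, 0.5, 1.0]), np.ones(4)]
pair, err = best_composite(bank_a[1] + bank_b[0], bank_a, bank_b)
```

Lengthening the matched segment shrinks the set of composites consistent with the mixture, which is the intuition behind seeking the longest matching segments.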
Abstract:
The blue supergiant Sher 25 is surrounded by an asymmetric, hourglass-shaped circumstellar nebula. Its structure and dynamics have been studied previously through high-resolution imaging and spectroscopy, and it appears dynamically similar to the ring structure around SN 1987A. Here, we present long-slit spectroscopy of the circumstellar nebula around Sher 25, and of the background nebula of the host cluster NGC 3603. We perform a detailed nebular abundance analysis to measure the gas-phase abundances of oxygen, nitrogen, sulphur, neon and argon. The oxygen abundance in the circumstellar nebula (12 + log O/H = 8.61 ± 0.13 dex) is similar to that in the background nebula (8.56 ± 0.07), suggesting that the composition of the host cluster is around solar. However, we confirm that the circumstellar nebula is very rich in nitrogen, with an abundance of 8.91 ± 0.15, compared to the background value of 7.47 ± 0.18. A new analysis of the stellar spectrum with the FASTWIND model atmosphere code suggests that the photospheric nitrogen and oxygen abundances in Sher 25 are consistent with the nebular results. While the nitrogen abundances are high when compared to stellar evolutionary models, they do not unambiguously confirm that the star has undergone convective dredge-up during a previous red supergiant phase. We suggest that the more likely scenario is that the nebula was ejected from the star while it was in the blue supergiant phase. The star's initial mass was around 50 M⊙, which is too high for it to have had a convective envelope stage as a red supergiant. Rotating stellar models that mix core-processed material to the stellar surface during core H-burning can quantitatively match the stellar results with the nebular abundances.
Abstract:
We present optical (UBVRI) and near-IR (YJHK) photometry of the normal Type Ia supernova (SN) 2004S. We also present eight optical spectra and one near-IR spectrum of SN 2004S. The light curves and spectra are nearly identical to those of SN 2001el. This is the first time we have seen optical and IR light curves of two Type Ia SNe match so closely. Within the one-parameter family of light curves for normal Type Ia SNe, that two objects should have such similar light curves implies that they had identical intrinsic colors and produced similar amounts of Ni-56. From the similarities of the light-curve shapes we obtain a set of extinctions as a function of wavelength that allows a simultaneous solution for the distance modulus difference of the two objects, the difference of the host galaxy extinctions, and R_V. Since SN 2001el had roughly an order of magnitude more host galaxy extinction than SN 2004S, the value of R_V = 2.15 (+0.24, -0.22) pertains primarily to dust in the host galaxy of SN 2001el. We have also shown via Monte Carlo simulations that adding rest-frame J-band photometry to the complement of BVRI photometry of Type Ia SNe decreases the uncertainty in the distance modulus by a factor of 2.7. A combination of rest-frame optical and near-IR photometry clearly gives more accurate distances than using rest-frame optical photometry alone.
Abstract:
Two techniques are demonstrated to produce ultrashort pulse trains capable of quasi-phase-matching high-harmonic generation. The first technique makes use of an array of birefringent crystals and is shown to generate high-contrast pulse trains with constant pulse spacing. The second technique employs a grating-pair stretcher, a multiple-order wave plate, and a linear polarizer. Trains of up to 100 pulses are demonstrated with this technique, with almost constant inter-pulse separation. It is shown that arbitrary pulse separation can be achieved by introducing the appropriate dispersion. This principle is demonstrated by using an acousto-optic programmable dispersive filter to introduce third- and fourth-order dispersion, leading to a linear and quadratic variation of the separation of pulses through the train. Chirped-pulse trains of this type may be used to quasi-phase-match high-harmonic generation in situations where the coherence length varies through the medium. (C) 2010 Optical Society of America
Abstract:
The majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models to date mainly focus on the improvement of their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the system local behaviour when all the rules are aggregated to produce the overall system output. This is one of the distinctive characteristics from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both nonlinear parameters in the rule premises and linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field where the two sets of parameters are optimized separately, the consequent parameters are transformed into a dependent set on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function by using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues about the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed new algorithm, and compared with the results from some well-known methods.
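The key idea, making the linear consequent parameters a dependent function of the nonlinear premise parameters, can be sketched for a first-order TSK model with Gaussian memberships. All names and the least-squares step below are illustrative assumptions, not the paper's exact formulation (which embeds this dependency inside a Levenberg-Marquardt update with a new Jacobian):

```python
import numpy as np

def firing(x, centers, widths):
    # Gaussian memberships -> normalised firing strengths per rule.
    w = np.exp(-((x[:, None] - centers) ** 2) / (2 * widths ** 2))
    return w / w.sum(axis=1, keepdims=True)

def fit_consequents(x, y, centers, widths):
    # Solve the linear consequents y_k = a_k*x + b_k by least squares,
    # so they become a dependent function of the premise parameters.
    g = firing(x, centers, widths)
    Phi = np.hstack([g * x[:, None], g])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    K = centers.size
    return np.stack([coef[:K], coef[K:]], axis=1)

def tsk_output(x, centers, widths, theta):
    g = firing(x, centers, widths)
    local = theta[:, 0] * x[:, None] + theta[:, 1]   # rule consequents
    return (g * local).sum(axis=1)                   # weighted aggregation

# Toy fit: a linear target is recovered exactly by two rules.
x = np.linspace(-1.0, 1.0, 40)
y = 2.0 * x + 1.0
centers, widths = np.array([-0.5, 0.5]), np.array([0.4, 0.4])
theta = fit_consequents(x, y, centers, widths)
yhat = tsk_output(x, centers, widths, theta)
```

With the consequents expressed this way, a gradient step on the premises alone moves both parameter sets coherently, which is the integration the paper exploits.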
Abstract:
Both Anderson and Gatignon and the Uppsala internationalization model see the initial mode of foreign market entry and subsequent modes of operation as unilaterally determined by multinational enterprises (MNEs) arbitraging control and risk and increasing their commitment as they gain experience in the target market. OLI and internalization models do recognize that foreign market entry requires the bundling of MNE and complementary local assets, which they call location or country-specific advantages, but implicitly assume that those assets are freely accessible to MNEs. In contrast to both of these MNE-centric views, I explicitly consider the transactional characteristics of complementary local assets and model foreign market entry as the optimal assignment of equity between their owners and MNEs. By looking at the relative efficiency of the different markets in which MNE and complementary local assets are traded, and at how these two categories of assets match, I am able to predict whether equity will be held by MNEs or by local firms, or shared between them, and whether MNEs will enter through greenfields, brownfields, or acquisitions. The bundling model I propose has interesting implications for the evolution of the MNE footprint in host countries, and for the reasons behind the emergence of Dragon MNEs.
Abstract:
In this paper, we consider the problem of tracking similar objects. We show how a mean field approach can be used to deal with interacting targets and we compare it with Markov Chain Monte Carlo (MCMC). Two mean field implementations are presented. The first one is more general and uses particle filtering. We discuss some simplifications of the base algorithm that reduce the computation time. The second one is based on suitable Gaussian approximations of probability densities that lead to a set of self-consistent equations for the means and covariances. These equations give the Kalman solution if there is no interaction. Experiments have been performed on two kinds of sequences. The first kind is composed of a single long sequence of twenty roaming ants and was previously analysed using MCMC. In this case, our mean field algorithms obtain substantially better results. The second kind corresponds to selected sequences of a football match, in which modelling the interaction prevents tracker coalescence in situations where independent trackers fail.
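As a point of reference for the non-interacting case, the Gaussian mean-field equations collapse to the ordinary Kalman recursion. A minimal one-dimensional sketch, assuming a random-walk motion model with hypothetical noise parameters q and r:

```python
def kalman_step(mean, var, z, q, r):
    # Predict under a random-walk motion model, then update with
    # measurement z; q and r are process and measurement noise variances.
    mean_p, var_p = mean, var + q
    k = var_p / (var_p + r)          # Kalman gain
    return mean_p + k * (z - mean_p), (1.0 - k) * var_p

# One cycle: prior N(0, 1), measurement z = 1 with unit noise.
m, v = kalman_step(0.0, 1.0, z=1.0, q=0.0, r=1.0)
```

The interacting case adds coupling terms between targets to these self-consistent equations, which is what keeps nearby trackers from coalescing.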
Abstract:
Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources are identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that match the application's requirements.
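The two-phase mechanism can be sketched as a hard-constraint filter followed by a pluggable ranking heuristic. Resource fields, constraints and costs below are invented for illustration; they do not correspond to any real provider's catalogue:

```python
def discover(resources, constraints, score):
    # Phase 1: keep only resources satisfying every hard constraint.
    feasible = [r for r in resources if all(c(r) for c in constraints)]
    # Phase 2: rank the feasible set with the chosen heuristic.
    return sorted(feasible, key=score)

resources = [
    {"name": "A", "cores": 8,  "ram_gb": 32, "cost": 0.40},
    {"name": "B", "cores": 4,  "ram_gb": 16, "cost": 0.10},
    {"name": "C", "cores": 16, "ram_gb": 64, "cost": 0.90},
]
constraints = [lambda r: r["cores"] >= 8, lambda r: r["ram_gb"] >= 32]
# Cost-based heuristic; a performance-based one would just swap the key.
cheapest_first = discover(resources, constraints, score=lambda r: r["cost"])
```

Swapping the `score` function is how the same discovery layer serves both the cost-sensitive and the performance-sensitive application.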
Abstract:
This invention relates to electronic circuit packages designed to hold high frequency circuits operating particularly, but not exclusively, in the microwave, millimeter wave, and sub-millimeter wave bands. The invention provides a package incorporating a cavity in a material for containment of the circuits, wherein the package further incorporates at least one conductive surface mounted on an inner surface extending into the cavity, the conductivity thereof being adapted to be at least partially absorbent to electromagnetic radiation. The conductive surface according to the present invention will tend to attenuate electromagnetic radiation present within the cavity, and so help to prevent undesired coupling from one point to another within the cavity. The conductivity of the conductive material is preferably arranged to match the impedance of the radiation mode estimated or computed to be present within the cavity.
Abstract:
Late age-related maculopathy (ARM) is responsible for the majority of blind registrations in the Western world among persons over 50 years of age. It has devastating effects on quality of life and independence and is becoming a major public health concern. Current treatment options are limited and most aim to slow progression rather than restore vision; therefore, early detection to identify those patients most suitable for these interventions is essential. In this work, we review the literature encompassing the investigation of visual function in ARM in order to highlight those visual function parameters which are affected very early in the disease process. We pay particular attention to measures of acuity, contrast sensitivity (CS), cone function, electrophysiology, visual adaptation, central visual field sensitivity and metamorphopsia. We also consider the impact of bilateral late ARM on visual function as well as the relationship between measures of vision function and self-reported visual functioning. Much interest has centred on the identification of functional changes which may predict progression to neovascular disease; therefore, we outline the longitudinal studies, which to date have reported dark-adaptation time, short-wavelength cone sensitivity, colour-match area effect, dark-adapted foveal sensitivity, foveal flicker sensitivity, slow recovery from glare and slower foveal electroretinogram implicit time as functional risk factors for the development of neovascular disease. Despite progress in this area, we emphasise the need for longitudinal studies designed in light of developments in disease classification and retinal imaging, which would ensure the correct classification of cases and controls, and provide increased understanding of the natural course and progression of the disease and further elucidate the structure-function relationships in this devastating disorder.
Abstract:
In this paper, a data driven orthogonal basis function approach is proposed for non-parametric FIR nonlinear system identification. The basis functions are not fixed a priori and match the structure of the unknown system automatically. This eliminates the problem of blindly choosing the basis functions without a priori structural information. Further, based on the proposed basis functions, approaches are proposed for model order determination and regressor selection along with their theoretical justifications. © 2008 IEEE.
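One simple way to obtain data-driven orthogonal regressors, sketched here as an assumption rather than the paper's exact construction, is to QR-factorise the regressor matrix: the columns of Q then form an orthonormal basis adapted to the observed data rather than fixed a priori.

```python
import numpy as np

def orthogonal_regressors(X):
    # QR factorisation: Q has orthonormal columns spanning the same
    # space as the original (data-dependent) regressors in X.
    Q, R = np.linalg.qr(X)
    return Q, R

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # toy regressor matrix
Q, R = orthogonal_regressors(X)
```

Orthogonality decouples the regressors' contributions, which is what makes order determination and regressor selection tractable term by term.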
Abstract:
INTRODUCTION: Breaching the skin's stratum corneum barrier raises the possibility of the administration of vaccines, gene vectors, antibodies and even nanoparticles, all of which have at least their initial effect on populations of skin cells. AREAS COVERED: Intradermal vaccine delivery holds enormous potential for improved therapeutic outcomes for patients, particularly those in the developing world. Various vaccine-delivery strategies have been employed, which are discussed in this review. The importance of cutaneous immunobiology on the effect produced by microneedle-mediated intradermal vaccination is also discussed. EXPERT OPINION: Microneedle-mediated vaccines hold enormous potential for patient benefit. However, in order for microneedle vaccine strategies to fulfill their potential, the proportion of an immune response that is due to the local action of delivered vaccines on skin antigen-presenting cells, and what is due to a systemic effect from vaccines reaching the systemic circulation, must be determined. Moreover, industry will need to invest significantly in new equipment and instrumentation in order to mass-produce microneedle vaccines consistently. Finally, microneedles will need to demonstrate consistent dose delivery across patient groups and match this to reliable immune responses before they will replace tried-and-tested needle-and-syringe-based approaches.
Abstract:
Tuning and stacking approaches have been used to compile non-annually resolved peatland palaeo-water table records in several studies. This approach has been proposed as a potential way forward to overcome the chronological problems that beset the correlation of records and may help in the upscaling of palaeoclimate records for climate model-data comparisons. This paper investigates the uncertainties in this approach using a published water table compilation from Northern Ireland. Firstly, three plausible combinations of chronological match points are used to assess the variability of the reconstructions. It is apparent that even with markedly different match point combinations, the compilations are highly similar, especially when a 100-year running mean line is used for interpretation. Secondly, sample-specific reconstruction errors are scaled in relation to the standardised water table units and illustrated on the compiled reconstruction. Thirdly, the total chronological errors for each reconstruction are calculated using Bayesian age-modelling software. Although tuning and stacking approaches may be suitable for compiling peat-based palaeoclimate records, it is important that the reconstruction and chronological errors are acknowledged and clearly illustrated in future studies. The tuning of peat-based proxy climate records is based on a potentially flawed assumption that events are synchronous between sites. © 2011 Elsevier Ltd and INQUA.
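The interpretation step above rests on a running-mean line through the compiled record; a minimal sketch of such a moving average (the window length, e.g. 100 years, depends on the record's resolution and is an assumption here):

```python
import numpy as np

def running_mean(series, window):
    # Simple moving average over a fixed window, dropping the
    # incomplete edges ("valid" mode).
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

smoothed = running_mean([1.0, 2.0, 3.0, 4.0], window=2)
```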
Abstract:
The development of an automated system for the quality assessment of aerodrome ground lighting (AGL), in accordance with associated standards and recommendations, is presented. The system is composed of an image sensor, placed inside the cockpit of an aircraft to record images of the AGL during a normal descent to an aerodrome. A model-based methodology is used to ascertain the optimum match between a template of the AGL and the actual image data in order to calculate the position and orientation of the camera at the instant the image was acquired. The camera position and orientation data are used, along with the pixel grey level for each imaged luminaire, to estimate a value for the luminous intensity of a given luminaire. This can then be compared with the expected brightness for that luminaire to ensure it is operating to the required standards. As such, a metric for the quality of the AGL pattern is determined. Experiments on real image data are presented to demonstrate the application and effectiveness of the system.
Abstract:
Predicting how species distributions might shift as global climate changes is fundamental to the successful adaptation of conservation policy. An increasing number of studies have responded to this challenge by using climate envelopes, modeling the association between climate variables and species distributions. However, it is difficult to quantify how well species actually match climate. Here, we use null models to show that species-climate associations found by climate envelope methods are no better than chance for 68 of 100 European bird species. In line with predictions, we demonstrate that the species with distribution limits determined by climate have more northerly ranges. We conclude that scientific studies and climate change adaptation policies based on the indiscriminate use of climate envelope methods irrespective of species sensitivity to climate may be misleading and in need of revision.
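A minimal sketch of the null-model logic, assuming a permutation test of the association between presence data and a single climate variable (the study's actual null models and climate envelope methods are more involved):

```python
import numpy as np

def null_model_p(presence, climate, n_perm=999, seed=0):
    # Observed association vs. associations under random relabelling of
    # presences; a large p-value means no better than chance.
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(presence, climate)[0, 1])
    null = [abs(np.corrcoef(rng.permutation(presence), climate)[0, 1])
            for _ in range(n_perm)]
    return (1 + sum(n >= obs for n in null)) / (n_perm + 1)

# Toy data with a genuine climate limit: presence switches on above 0.5.
climate = np.linspace(0.0, 1.0, 100)
presence = (climate > 0.5).astype(float)
p = null_model_p(presence, climate)
```

For a species whose range genuinely tracks climate, p is small; for the 68 of 100 species in the study, the analogous comparison was indistinguishable from chance.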