947 results for Code-switching
Abstract:
As wavelength-division multiplexing (WDM) evolves towards practical applications in optical transport networks, waveband switching (WBS) has been introduced to cut down the operational costs and to reduce the complexities and sizes of network components, e.g., optical cross-connects (OXCs). This paper considers the routing, wavelength assignment and waveband assignment (RWWBA) problem in a WDM network supporting mixed waveband and wavelength switching. First, the techniques supporting waveband switching are studied, where a node architecture enabling mixed waveband and wavelength switching is proposed. Second, to solve the RWWBA problem with reduced switching costs and improved network throughput, the cost savings and call blocking probabilities along intermediate waveband-routes are analyzed. Our analysis reveals some important insights about the cost savings and call blocking probability in relation to the fiber capacity, the candidate path, and the traffic load. Third, based on our analysis, an online integrated intermediate WBS algorithm (IIWBS) is proposed. IIWBS determines the waveband switching route for a call along its candidate path according to the node connectivity, the link utilization, and the path length information. In addition, the IIWBS algorithm is adaptive to real network applications under dynamic traffic requests. Finally, our simulation results show that IIWBS outperforms a previous intermediate WBS algorithm and RWA algorithms in terms of network throughput and cost efficiency.
Abstract:
Springer et al. (2003) contend that sequential declines occurred in North Pacific populations of harbor and fur seals, Steller sea lions, and sea otters. They hypothesize that these were due to increased predation by killer whales, when industrial whaling’s removal of large whales as a supposed primary food source precipitated a prey switch. Using a regional approach, we reexamined whale catch data, killer whale predation observations, and the current biomass and trends of potential prey, and found little support for the prey-switching hypothesis. Large whale biomass in the Bering Sea did not decline as much as suggested by Springer et al., and much of the reduction occurred 50–100 yr ago, well before the declines of pinnipeds and sea otters began; thus, the need to switch prey starting in the 1970s is doubtful. With the sole exception that the sea otter decline followed the decline of pinnipeds, the reported declines were not in fact sequential. Given this, it is unlikely that a sequential megafaunal collapse from whales to sea otters occurred. The spatial and temporal patterns of pinniped and sea otter population trends are more complex than Springer et al. suggest, and are often inconsistent with their hypothesis. Populations remained stable or increased in many areas, despite extensive historical whaling and high killer whale abundance. Furthermore, observed killer whale predation has largely involved pinnipeds and small cetaceans; there is little evidence that large whales were ever a major prey item in high latitudes. Small cetaceans (ignored by Springer et al.) were likely abundant throughout the period. Overall, we suggest that the Springer et al. hypothesis represents a misleading and simplistic view of events and trophic relationships within this complex marine ecosystem.
Abstract:
Many tools and techniques for addressing software maintenance problems rely on code coverage information. Often, this coverage information is gathered for a specific version of a software system, and then used to perform analyses on subsequent versions of that system without being recalculated. As a software system evolves, however, modifications to the software alter the software’s behavior on particular inputs, and code coverage information gathered on earlier versions of a program may not accurately reflect the coverage that would be obtained on later versions. This discrepancy may affect the success of analyses dependent on code coverage information. Despite the importance of coverage information in various analyses, in our search of the literature we find no studies specifically examining the impact of software evolution on code coverage information. Therefore, we conducted empirical studies to examine this impact. The results of our studies suggest that even relatively small modifications can greatly affect code coverage information, and that the degree of impact of change on coverage may be difficult to predict.
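The staleness problem above — coverage gathered on version N being reused on version N+1 — can be made concrete by comparing the sets of covered lines across versions. The metric and all names below are illustrative assumptions, not the measures used in the studies:

```python
def coverage_drift(old_coverage, new_coverage):
    """Fraction of lines whose coverage status changed between versions.

    Both arguments are sets of covered line numbers for the same test.
    A rough proxy for how stale the old coverage data has become
    (hypothetical metric, for illustration only).
    """
    changed = old_coverage.symmetric_difference(new_coverage)
    return len(changed) / max(len(old_coverage | new_coverage), 1)

old = {10, 11, 12, 20}        # lines covered by a test on version N
new = {10, 11, 13, 20, 21}    # same test on version N+1
drift = coverage_drift(old, new)  # 3 changed lines out of 6 distinct
```

Even this toy change leaves half the combined line set with a different coverage status, echoing the studies' finding that small modifications can greatly affect coverage information.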
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, emphasizing methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several cost-benefit trade-offs and many opportunities for future work.
Abstract:
Due to the lack of optical random access memory, optical fiber delay line (FDL) is currently the only way to implement optical buffering. Feed-forward and feedback are two kinds of FDL structures in optical buffering. Both have advantages and disadvantages. In this paper, we propose a more effective hybrid FDL architecture that combines the merits of both schemes. The core of this switch is the arrayed waveguide grating (AWG) and the tunable wavelength converter (TWC). It requires smaller optical device sizes and fewer wavelengths and has less noise than feedback architecture. At the same time, it can facilitate preemptive priority routing which feed-forward architecture cannot support. Our numerical results show that the new switch architecture significantly reduces packet loss probability.
Abstract:
In this paper, we consider the problem of topology design for optical networks. We investigate the problem of selecting switching sites to minimize the total cost of the optical network. The cost of an optical network can be expressed as the sum of three main factors: the site cost, the link cost, and the switch cost. To the best of our knowledge, this problem has not been studied in the general form investigated in this paper. We present a mixed integer quadratic programming (MIQP) formulation of the problem to find the optimal value of the total network cost. We also present an efficient heuristic to approximate the solution in polynomial time. The experimental results show good performance of the heuristic. In experiments with 10-node networks, the total network cost computed by the heuristic is within 2% to 21% of its optimal value, and in 51% of those experiments it is within 8% of optimal. We also discuss the insight gained from our experiments.
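The three-factor cost decomposition above can be sketched directly; the data layout and names are assumptions for illustration, not the paper's MIQP formulation:

```python
def total_network_cost(selected_sites, links, site_cost, link_cost, switch_cost):
    """Total cost = sum of site, link, and switch costs (illustrative).

    `selected_sites` are the chosen switching sites; `links` are the
    site pairs connected by fiber. Cost tables are plain dicts.
    """
    sites = sum(site_cost[s] for s in selected_sites)
    fiber = sum(link_cost[e] for e in links)
    switches = sum(switch_cost[s] for s in selected_sites)
    return sites + fiber + switches

cost = total_network_cost(
    selected_sites=["A", "B"],
    links=[("A", "B")],
    site_cost={"A": 10, "B": 12},
    link_cost={("A", "B"): 5},
    switch_cost={"A": 3, "B": 4},
)  # 10 + 12 + 5 + 3 + 4 = 34
```

A site-selection heuristic would search over `selected_sites` subsets to minimize this objective, subject to connectivity constraints.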
Abstract:
Purpose: This prospective randomized matched-pair controlled trial aimed to evaluate marginal bone levels and soft tissue alterations at implants restored according to the platform-switching concept with a new inward-inclined platform and compare them with external-hexagon implants. Materials and Methods: Traditional external-hexagon (control group) implants and inward-inclined platform implants (test group), all with the same implant body geometry and 13 mm in length, were inserted in a standardized manner in the posterior maxillae of 40 patients. Radiographic bone levels were measured by two independent examiners after 6, 12, and 18 months of prosthetic loading. Buccal soft tissue height was measured at the time of abutment connection and 18 months later. Results: After 18 months of loading, all 80 implants were clinically osseointegrated in the 40 participating patients. Radiographic evaluation showed mean bone losses of 0.5 ± 0.1 mm (range, 0.3 to 0.7 mm) and 1.6 ± 0.3 mm (range, 1.1 to 2.2 mm) for test and control implants, respectively. Soft tissue height showed a significant mean decrease of 2.4 mm in the control group, compared to 0.6 mm around the test implants. Conclusions: After 18 months, significantly greater bone loss was observed at implants restored according to the conventional external-hexagon protocol compared to the platform-switching concept. In addition, decreased soft tissue height was associated with the external-hexagon implants versus the platform-switched implants. INT J ORAL MAXILLOFAC IMPLANTS 2012;27:927-934.
Abstract:
This new and general method, here called overflow current switching, allows a fast, continuous, and smooth transition between scales in wide-range current measurement systems such as electrometers. This is achieved, in a hydraulic analogy, by diverting only the overflow current, so that no slow element is forced to change its state during the switching. As a result, this approach practically eliminates the long dead time in low-current (picoampere) switching. Similar to a logarithmic scale, a composition of n adjacent linear scales, like a segmented ruler, measures the current. A linear wide-range system based on this technique ensures fast and continuous measurement over the entire range, without blind regions during transitions, while still holding suitable accuracy for many applications. A full mathematical development of the method is given. Several realistic computer simulations demonstrated the viability of the technique.
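The "segmented ruler" idea above — n adjacent linear scales covering a wide range — can be sketched as a simple scale selector; the decade values and names are illustrative assumptions, and the sketch omits the overflow-diversion circuitry that makes the real transition seamless:

```python
def select_scale(current, scales):
    """Pick the lowest full-scale range that can measure `current`.

    `scales` is a sorted list of full-scale values forming adjacent
    linear segments, like a segmented ruler (illustrative values only;
    the actual method diverts the overflow current in hardware so the
    transition between segments has no dead time).
    """
    for full_scale in scales:
        if abs(current) <= full_scale:
            return full_scale
    raise ValueError("current out of range")

# Four adjacent decade scales, from 1 pA to 1 nA full scale.
scales_pA = [1, 10, 100, 1000]
scale = select_scale(42, scales_pA)   # 42 pA lands on the 100 pA scale
```

Each segment stays linear, so accuracy is uniform within a segment, while the composition of segments spans the wide range a logarithmic scale would cover.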
Abstract:
In this paper, we perform a thorough analysis of a spectral phase-encoded time spreading optical code division multiple access (SPECTS-OCDMA) system based on Walsh-Hadamard (W-H) codes, aiming not only at finding optimal code-set selections but also at assessing the loss of security due to crosstalk. We prove that an inadequate choice of codes can make the crosstalk between active users large enough that data from the user of interest can be detected by another user. The proposed algorithm for code optimization targets code sets that produce the minimum bit error rate (BER) among all codes for a specific number of simultaneous users. This methodology allows us to find optimal code sets for any OCDMA system, regardless of the code family used and the number of active users. This procedure is crucial for circumventing the unexpected lack of security due to crosstalk. We also show that a SPECTS-OCDMA system based on W-H codes of length 32 (64) fundamentally limits the number of simultaneous users to 4 (8) with no security violation due to crosstalk. More importantly, we prove that only a small fraction of the available code sets is actually immune to crosstalk with acceptable BER (< 10⁻⁹): approximately 0.5% for W-H 32 with four simultaneous users, and about 1 × 10⁻⁴% for W-H 64 with eight simultaneous users.
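The W-H code family analyzed above is generated by the standard Sylvester construction, sketched below. This shows only how the spreading codes arise and their pairwise orthogonality; the paper's code-set optimization and BER evaluation are not reproduced here:

```python
def walsh_hadamard(n):
    """Build an n x n Walsh-Hadamard matrix (n a power of two) via the
    Sylvester recursion H(2n) = [[H, H], [H, -H]].

    Each row is a +/-1 spreading code; distinct rows are orthogonal.
    """
    if n == 1:
        return [[1]]
    h = walsh_hadamard(n // 2)
    top = [row + row for row in h]
    bottom = [row + [-x for x in row] for row in h]
    return top + bottom

H = walsh_hadamard(4)
# Orthogonality of two distinct codes: their dot product is zero.
dot = sum(a * b for a, b in zip(H[1], H[2]))
```

Orthogonality holds for ideal synchronous correlation; the paper's point is that phase-encoding crosstalk breaks this ideal, so only certain subsets of rows remain safe code sets in practice.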
Abstract:
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
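The codeword-membership test underlying the work above amounts to computing a syndrome: a word belongs to the code exactly when its syndrome is zero. A minimal sketch using the classic binary (7,4) Hamming code follows; the code length, parity-check matrix, and bit-level framing are illustrative assumptions, since the paper works with a particular class of cyclic Hamming codes and a specific nucleotide-to-symbol mapping:

```python
def hamming_syndrome(bits):
    """Syndrome of a 7-bit word under the binary (7,4) Hamming code.

    Column j of H is the binary representation of j+1, so a zero
    syndrome means `bits` is a valid codeword, and a nonzero syndrome
    reads out the 1-based position of a single flipped bit.
    """
    H = [[0, 0, 0, 1, 1, 1, 1],
         [0, 1, 1, 0, 0, 1, 1],
         [1, 0, 1, 0, 1, 0, 1]]
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

codeword = [0, 0, 0, 0, 0, 0, 0]   # the all-zero word is always a codeword
corrupted = codeword[:]
corrupted[2] = 1                    # flip the bit at 1-based position 3
syndrome = hamming_syndrome(corrupted)
```

A DNA sequence mapped to bits would be declared a Hamming codeword under the same zero-syndrome criterion, which is the kind of identification the paper reports for genes and a plasmid genome.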
Abstract:
The hierarchy of the segmentation cascade responsible for establishing the Drosophila body plan is composed of gap, pair-rule and segment polarity genes. However, no pair-rule stripes are formed in the anterior regions of the embryo. This lack of stripe formation, as well as other evidence from the literature that is further investigated here, led us to the hypothesis that anterior gap genes might be involved in a combinatorial mechanism responsible for repressing the cis-regulatory modules (CRMs) of hairy (h), even-skipped (eve), runt (run), and fushi-tarazu (ftz) anterior-most stripes. In this study, we investigated huckebein (hkb), which has a gap expression domain at the anterior tip of the embryo. Using genetic methods, we were able to detect deviations from the wild-type patterns of the anterior-most pair-rule stripes in different genetic backgrounds, which were consistent with Hkb-mediated repression. Moreover, we developed an image processing tool that, for the most part, confirmed our assumptions. Using an hkb misexpression system, we further detected specific repression on anterior stripes. Furthermore, bioinformatics analysis predicted an increased significance of binding site clusters in the CRMs of h 1, eve 1, run 1 and ftz 1 when Hkb was incorporated in the analysis, indicating that Hkb plays a direct role in these CRMs. We further discuss that Hkb and Slp1, the other previously identified common repressor of anterior stripes, might participate in a combinatorial repression mechanism controlling stripe CRMs in the anterior parts of the embryo and define the borders of these anterior stripes. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
Abstract Background Overflow metabolism is an undesirable characteristic of aerobic cultures of Saccharomyces cerevisiae during biomass-directed processes. It results from elevated sugar consumption rates that cause a high substrate conversion to ethanol and other bi-products, severely affecting cell physiology, bioprocess performance, and biomass yields. Fed-batch culture, where sucrose consumption rates are controlled by the external addition of sugar aiming at its low concentrations in the fermentor, is the classical bioprocessing alternative to prevent sugar fermentation by yeasts. However, fed-batch fermentations present drawbacks that could be overcome by simpler batch cultures at relatively high (e.g. 20 g/L) initial sugar concentrations. In this study, a S. cerevisiae strain lacking invertase activity was engineered to transport sucrose into the cells through a low-affinity and low-capacity sucrose-H+ symport activity, and the growth kinetics and biomass yields on sucrose analyzed using simple batch cultures. Results We have deleted from the genome of a S. cerevisiae strain lacking invertase the high-affinity sucrose-H+ symporter encoded by the AGT1 gene. This strain could still grow efficiently on sucrose due to a low-affinity and low-capacity sucrose-H+ symport activity mediated by the MALx1 maltose permeases, and its further intracellular hydrolysis by cytoplasmic maltases. Although sucrose consumption by this engineered yeast strain was slower than with the parental yeast strain, the cells grew efficiently on sucrose due to an increased respiration of the carbon source. Consequently, this engineered yeast strain produced less ethanol and 1.5 to 2 times more biomass when cultivated in simple batch mode using 20 g/L sucrose as the carbon source. Conclusion Higher cell densities during batch cultures on 20 g/L sucrose were achieved by using a S. cerevisiae strain engineered in the sucrose uptake system. 
This result was accomplished by effectively reducing sucrose uptake by the yeast cells, avoiding overflow metabolism, with a concomitant reduction in ethanol production. The use of this modified yeast strain in the simpler batch culture mode can be a viable alternative to the more complicated traditional sucrose-limited fed-batch cultures for biomass-directed processes of S. cerevisiae.
Abstract:
Reinforced concrete beam elements are subjected, over their life cycle, to loads that cause shear and torsion. These elements may be subject to shear alone, pure torsion, or combined shear and torsion. The Brazilian standard code ABNT NBR 6118:2007 [1] establishes conditions for calculating the transverse reinforcement area in reinforced concrete beam elements using two design models based on the strut-and-tie analogy, first studied by Mörsch [2]. The strut angle θ (theta) can be considered constant and equal to 45º (Model I), or varying between 30º and 45º (Model II). For transverse ties (stirrups), the angle α (alpha) varies between 45º and 90º. When equilibrium torsion is required, a resistant model based on a space truss with a hollow section is considered. The space truss admits an inclination angle θ between 30º and 45º, in accordance with beam elements subjected to shear. This paper presents a theoretical study of Models I and II for combined shear and torsion, in which the geometry and load intensity of the reinforced concrete beams are varied, in order to verify the consumption of transverse reinforcement according to the calculation model adopted. As the strut angle in Model II ranges from 30º to 45º, the transverse reinforcement area (Asw) decreases, and the total reinforcement area, which includes the longitudinal torsion reinforcement (Asℓ), increases. It appears that, when considering Model II with a strut angle above 40º, under shear alone, the transverse reinforcement area increases by 22% compared to the values obtained using Model I.
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)