Abstract:
Most of the world’s languages lack electronic word-form dictionaries. The linguists who gather such dictionaries could be helped by an efficient morphology workbench that adapts to different environments and uses. A widely usable workbench would ideally be generally applicable, extensible, and freely available (GEA). It seems that such a solution could be implemented within the framework of finite-state methods. The current work defines the GEA desiderata and starts a series of articles on these desiderata in finite-state morphology. Subsequent parts will review the state of the art and present an action plan toward creating a widely usable finite-state morphology workbench.
Abstract:
This research is connected with an education development project for the four-year officer education program at the National Defence University. In this curriculum, physics was studied in two alternative course plans, namely scientific and general. Observations connected with the latter, e.g. student feedback and learning outcomes, indicated that action was needed to support the course. The reform work focused on the production of aligned, course-related instructional material. The learning material project produced a customized textbook set for the students of the general basic physics course. The research adapts phases that are typical of Design-Based Research (DBR). It analyses the feature requirements for a physics textbook aimed at a specific sector, examines frames supporting instructional material development, and summarizes the experiences gained in the learning material project when the selected frames were applied. The quality of instructional material is an essential part of qualified teaching. The goal of instructional material customization is to increase the product's customer-centric nature and to enhance its function as a support medium for the learning process. Textbooks are still one of the core elements in physics teaching. The idea of a textbook will remain, but its form and appearance may change according to the prevailing technology. The work deals with substance-connected frames (demands on a physics textbook according to the PER viewpoint, quality thinking in educational material development), frames of university pedagogy, and instructional material production processes. A wide knowledge and understanding of different frames is useful in development work if the frames are to be utilized to aid inspiration without limiting new reasoning and new kinds of models. Applying customization even in the frame utilization supports creative and situation-aware design and diminishes the gap between theory and practice.
Generally, physics teachers produce their own supplementary instructional material. Even though customization thinking is not unknown, the threshold to produce an entire textbook might be high. Even though the observations here are from the general physics course at the NDU, the research also provides tools for development in other discipline-related educational contexts. This research is an example of instructional material development work, together with the questions it uncovers, and presents thoughts on when textbook customization is rewarding. At the same time, the research aims to further creative customization thinking in instruction and development. Key words: Physics textbook, PER (Physics Education Research), Instructional quality, Customization, Creativity
Abstract:
802.11 WLANs are characterized by a high bit error rate and frequent changes in network topology. The key feature that distinguishes WLANs from wired networks is the multi-rate transmission capability, which helps to accommodate a wide range of channel conditions. This has a significant impact on higher layers such as the routing and transport levels. While many WLAN products provide rate control at the hardware level to adapt to the channel conditions, some chipsets, such as Atheros, do not support automatic rate control. We first present a design and implementation of an FER-based automatic rate control state machine, which utilizes the statistics available at the device driver to find the optimal rate. The results show that the proposed rate switching mechanism adapts quickly to the channel conditions. The hop count metric used by current routing protocols has proven itself for single-rate networks, but it fails to take into account other important factors in a multi-rate network environment. We propose transmission time as a better path quality metric to guide routing decisions. It incorporates the effects of contention for the channel, the air time to send the data, and the asymmetry of links. In this paper, we present a new design for a multi-rate mechanism as well as a new routing metric that is responsive to the rate. We address the issues involved in using transmission time as a metric and present a comparison of the performance of different metrics for dynamic routing.
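The transmission-time metric described above can be illustrated with a small sketch: each hop contributes its estimated frame delivery time, and the path metric is the sum. This is a minimal illustration under assumed parameters (the overhead constant and the simple retransmission model are not from the paper).

```python
def link_tx_time(frame_bits, rate_mbps, overhead_us=100.0, fer=0.0):
    """Estimated time (in microseconds) to deliver one frame over a link.
    overhead_us lumps contention and ACK overhead (an assumed value); the
    frame error rate inflates airtime via expected retransmissions."""
    airtime = overhead_us + frame_bits / rate_mbps  # bits / (Mbit/s) = microseconds
    return airtime / (1.0 - fer)

def path_metric(links):
    """Transmission-time path metric: the sum of per-hop delivery times,
    so a fast multi-hop path can beat a slow single hop."""
    return sum(link_tx_time(*link) for link in links)
```

Unlike hop count, this metric prefers two 11 Mb/s hops over one 1 Mb/s hop for a 1000-byte frame, since the summed airtime is lower.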
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider progressive transmission in a teleradiology system. The transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits the fundamental property of the wavelet transform - its ability to analyze the image at different resolution levels and the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results, in the context of error reduction, when compared with the unrestored case. It also has the capability to adapt to situations where the noise level in the image varies and to the changing requirements of medical experts. The applicability of the proposed approach has implications for the restoration of medical images in teleradiology systems. The proposed scheme is computationally efficient.
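The bit-plane idea can be sketched with a single-level Haar transform: only the most significant bit-planes of the detail coefficients are restored, which discards the fine-grained structure in which noise typically lives. This is a toy illustration of the principle, not the authors' scheme; the transform level, quantization rule, and parameter names are assumptions.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition (image sides must be even)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences
    ll = (lo[0::2] + lo[1::2]) / 2.0           # approximation sub-band
    lh = (lo[0::2] - lo[1::2]) / 2.0           # detail sub-bands
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

def restore_bitplanes(coeffs, planes, c_max=256.0):
    """Keep only the top `planes` bit-planes of the coefficients by
    quantizing with step c_max / 2**planes (the single tuning knob)."""
    step = c_max / 2.0 ** planes
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step) * step

def denoise(img, planes=3):
    """Restore a limited number of bit-planes of the detail sub-bands only."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, *(restore_bitplanes(c, planes) for c in (lh, hl, hh)))
```

Raising `planes` restores more detail (and more noise); lowering it smooths more aggressively, mirroring the single expert-preference parameter described above.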
Solution structure of O-glycosylated C-terminal leucine zipper domain of human salivary mucin (MUC7)
Abstract:
Solution structures of a 23-residue glycopeptide II (KIS*RFLLYMKNLLNRIIDDMVEQ, where * denotes the glycan Gal-beta-(1-3)-alpha-GalNAc) and its deglycosylated counterpart I, derived from the C-terminal leucine zipper domain of the low molecular weight human salivary mucin (MUC7), were studied using CD, NMR spectroscopy and molecular modeling. Peptide I was synthesized using Fmoc chemistry following the conventional procedure, and glycopeptide II was synthesized by incorporating the O-glycosylated building block (N alpha-Fmoc-Ser-[Ac-4,-beta-D-Gal-(1,3)-Ac(2)alpha-D-GalN(3)]-OPfp) at the appropriate position in the stepwise assembly of the peptide chain. Solution structures of the glycosylated and nonglycosylated peptides were studied in water and in the presence of 50% of an organic cosolvent, trifluoroethanol (TFE), using circular dichroism (CD), and in 50% TFE using two-dimensional proton nuclear magnetic resonance (2D H-1 NMR) spectroscopy. CD spectra in aqueous medium indicate that the apopeptide I adopts mostly a beta-sheet conformation, whereas the glycopeptide II assumes a helical structure. This transition in the secondary structure upon glycosylation demonstrates that the carbohydrate moiety exerts a significant effect on the peptide backbone conformation. However, in 50% TFE both peptides show pronounced helical structure. Sequential and medium-range NOEs, C alpha H chemical shift perturbations, (3)J(NH:C alpha H) couplings, and deuterium exchange rates of the amide proton resonances in water containing 50% TFE indicate that peptide I adopts an alpha-helical structure from Ile2-Val21 and glycopeptide II from Ser3-Glu22. The continuous stretch of helix observed in both peptides by both NMR and CD spectroscopy strongly suggests that the C-terminal domain of MUC7, with its heptad repeats of leucine or methionine residues, may be stabilized by a dimeric leucine zipper motif.
The results reported herein may be invaluable in understanding the aggregation (or dimerization) of the MUC7 glycoprotein, which would eventually have implications for determining its structure-function relationship.
Abstract:
In this dissertation I study language complexity from a typological perspective. Since the structuralist era, it has been assumed that local complexity differences in languages are balanced out in cross-linguistic comparisons and that complexity is not affected by the geopolitical or sociocultural aspects of the speech community. However, these assumptions have seldom been studied systematically from a typological point of view. My objective is to define complexity so that it is possible to compare it across languages and to approach its variation with the methods of quantitative typology. My main empirical research questions are: i) does language complexity vary in any systematic way in local domains, and ii) can language complexity be affected by the geographical or social environment? These questions are studied in three articles, whose findings are summarized in the introduction to the dissertation. In order to enable cross-language comparison, I measure complexity as the description length of the regularities in an entity; I separate it from difficulty, focus on local instead of global complexity, and break it up into different types. This approach helps avoid the problems that plagued earlier metrics of language complexity. My approach to grammar is functional-typological in nature, and the theoretical framework is basic linguistic theory. I delimit the empirical research functionally to the marking of core arguments (the basic participants in the sentence). I assess the distributions of complexity in this domain with multifactorial statistical methods and use different sampling strategies, implementing, for instance, the Greenbergian view of universals as diachronic laws of type preference. My data come from large and balanced samples (up to approximately 850 languages), drawn mainly from reference grammars. 
The results suggest that various significant trends occur in the marking of core arguments in regard to complexity and that complexity in this domain correlates with population size. These results provide evidence that linguistic patterns interact among themselves in terms of complexity, that language structure adapts to the social environment, and that there may be cognitive mechanisms that limit complexity locally. My approach to complexity and language universals can therefore be successfully applied to empirical data and may serve as a model for further research in these areas.
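The measurement idea above, complexity as the description length of regularities, is often operationalized with a compression proxy: the more regular a pattern, the shorter its compressed description. The sketch below uses zlib as such a proxy; it illustrates the general principle only and is not the dissertation's actual metric for core-argument marking.

```python
import zlib

def description_length(text):
    """Proxy for description length: the number of bytes a general-purpose
    compressor needs to describe the string."""
    return len(zlib.compress(text.encode("utf-8"), level=9))
```

A highly regular marking pattern compresses far below an irregular string of the same length, which is the intuition behind treating compressed size as a complexity measure.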
Abstract:
Mycobacterium tuberculosis is known to reside latently in a significant fraction of the human population. Although the bacterium possesses an aerobic mode of metabolism, it adapts to persistence under hypoxic conditions such as those encountered in granulomas. While in mammalian systems hypoxia is a recognized DNA-damaging stress, aspects of DNA repair in mycobacteria under such conditions have not been studied. We subjected Mycobacterium smegmatis, a model organism, to Wayne's protocol of hypoxia. Analysis of the mRNA of a key DNA repair enzyme, uracil DNA glycosylase (Ung), by real-time reverse transcriptase PCR (RT-PCR) revealed its downregulation during hypoxia. However, within an hour of recovery of the culture under normal oxygen levels, the Ung mRNA was restored. Analysis of Ung by immunoblotting and enzyme assays supported the RNA analysis results. To understand its physiological significance, we misexpressed Ung in M. smegmatis by using a hypoxia-responsive promoter of narK2 from M. tuberculosis. Although the misexpression of Ung during hypoxia decreased C-to-T mutations, it compromised bacterial survival upon recovery at normal oxygen levels. RT-PCR analysis of other base excision repair gene transcripts (UdgB and Fpg) suggested that these DNA repair functions also share with Ung the phenomenon of downregulation during hypoxia and recovery upon return to normal oxygen conditions. We discuss the potential utility of this phenomenon in developing attenuated strains of mycobacteria.
Abstract:
We consider the problem of providing mean delay and average throughput guarantees in random access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical local area wireless network. We model the CSMA network using a novel queueing-network-based approach. The optimal throughput per device and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention control algorithm that adapts the attempt probability based on the network load, and we obtain bounds for the packet transmission delay. The information we make use of is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
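The load-based adaptation of the attempt probability can be illustrated with the classical slotted-access observation that p = 1/n maximizes the chance that exactly one of n backlogged devices transmits. This is a generic sketch of the idea, not the paper's algorithm.

```python
def success_prob(n, p):
    """Probability that exactly one of n contending devices attempts in a slot."""
    return n * p * (1.0 - p) ** (n - 1)

def adapt_attempt_prob(n_backlogged):
    """Adapt the attempt probability to the measured load: p = 1/n maximizes
    success_prob, so a busier network uses a more conservative probability."""
    return 1.0 / max(n_backlogged, 1)
```

Attempting more aggressively than 1/n wastes slots on collisions, while attempting less often wastes them on silence; both reduce throughput and inflate delay.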
Abstract:
Two models for AF relaying, namely, fixed gain and fixed power relaying, have been extensively studied in the literature given their ability to harness spatial diversity. In fixed gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay channel gain. In fixed power relaying, the relay transmit power is fixed, but its gain varies. We revisit and generalize the fundamental two-hop AF relaying model. We present an optimal scheme in which an average power constrained AF relay adapts its gain and transmit power to minimize the symbol error probability (SEP) at the destination. Also derived are insightful and practically amenable closed-form bounds for the optimal relay gain. We then analyze the SEP of MPSK, derive tight bounds for it, and characterize the diversity order for Rayleigh fading. Also derived is an SEP approximation that is accurate to within 0.1 dB. Extensive results show that the scheme yields significant energy savings of 2.0-7.7 dB at the source and relay. Optimal relay placement for the proposed scheme is also characterized, and is different from fixed gain or power relaying. Generalizations to MQAM and other fading distributions are also discussed.
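As background for the two-hop model, the end-to-end SNR of a fixed-power AF link and a standard nearest-neighbour SEP approximation for MPSK can be sketched as follows. These are textbook forms used for illustration; they are not the optimal gain/power adaptation policy derived in the paper.

```python
import math

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def af_e2e_snr(snr_sr, snr_rd):
    """End-to-end SNR of a two-hop fixed-power AF link: the harmonic-type
    combination always lies below the weaker hop's SNR."""
    return snr_sr * snr_rd / (snr_sr + snr_rd + 1.0)

def mpsk_sep_approx(snr, M):
    """Nearest-neighbour approximation of the MPSK symbol error probability."""
    return 2.0 * q_func(math.sqrt(2.0 * snr) * math.sin(math.pi / M))
```

The harmonic-type combination makes clear why the weaker hop dominates, which is why jointly adapting the relay gain and power, as the paper proposes, yields energy savings.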
Abstract:
Ubiquitous computing is an emerging paradigm which enables users to access preferred services wherever they are, whenever they want, and in the way they need, with zero administration. While moving from one place to another, users do not need to specify and configure their surrounding environment; the system initiates the necessary adaptation by itself to cope with the changing environment. In this paper we propose a system to provide context-aware ubiquitous multimedia services without user intervention. We analyze the context of the user based on weights, identify the UMMS (Ubiquitous Multimedia Service) based on the collected context information and the user profile, search for the optimal server to provide the required service, and then adapt the service according to the user's local environment and preferences. The experiment was conducted several times with different context parameters, their weights, and various preferences for a user. The results are quite encouraging.
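The weight-based context analysis can be illustrated with a small scoring sketch. The attribute names and the linear weighting rule below are assumptions for demonstration, not the paper's model.

```python
def context_score(context, service, weights):
    """Weighted fraction of context attributes that a candidate service matches."""
    total = sum(weights.values())
    matched = sum(w for key, w in weights.items()
                  if context.get(key) == service.get(key))
    return matched / total if total else 0.0

def select_service(context, services, weights):
    """Pick the service profile that best matches the weighted context."""
    return max(services, key=lambda s: context_score(context, s, weights))
```

Weights let the system rank context attributes, so a mismatch on a low-weight attribute (say, bandwidth) does not override a match on a high-weight one (say, location).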
Abstract:
Amplify-and-forward (AF) relay based cooperation has been investigated in the literature given its simplicity and practicality. Two models for AF, namely, fixed gain and fixed power relaying, have been extensively studied. In fixed gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay (SR) channel gain. In fixed power relaying, the relay's instantaneous transmit power is fixed, but its gain varies. We propose a general AF cooperation model in which an average transmit power constrained relay jointly adapts its gain and transmit power as a function of the channel gains. We derive the optimal AF gain policy that minimizes the fading-averaged symbol error probability (SEP) of MPSK and present insightful and tractable lower and upper bounds for it. We then analyze the SEP of the optimal policy. Our results show that the optimal scheme is up to 39.7% and 47.5% more energy-efficient than fixed power relaying and fixed gain relaying, respectively. Further, the weaker the direct source-destination link, the greater are the energy-efficiency gains.
Abstract:
Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time-varying networks. We perform a full stochastic analysis using a continuous-time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an "adaptive threshold": when the ratio of the susceptibility (or infectivity) rate to the recovery rate is below the threshold value, adaptive behavior can prevent the epidemic; however, if it is above the threshold, no amount of behavioral adaptation can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
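A toy stochastic simulation conveys the adaptive-threshold effect: if infected individuals scale down their interaction levels enough, the outbreak dies out; otherwise it persists. This discrete-time, fully mixed sketch is far simpler than the continuous-time Markov chain analysis on time-varying networks described above, and all parameter values are assumptions.

```python
import random

def simulate_sis(n, beta, gamma, adapt, steps, seed=0):
    """Discrete-time SIS model with behavioral adaptation: `adapt` in (0, 1]
    scales the per-contact infectivity of infected individuals (adapt=1.0
    means no behavioral response). Returns the final number infected."""
    rng = random.Random(seed)
    infected = {0}
    for _ in range(steps):
        new_inf, recovered = set(), set()
        for i in infected:
            if rng.random() < gamma:                    # recovery
                recovered.add(i)
            for j in range(n):                          # contacts, damped by adaptation
                if j not in infected and rng.random() < beta * adapt / n:
                    new_inf.add(j)
        infected = (infected - recovered) | new_inf
        if not infected:
            break
    return len(infected)
```

Averaged over a few runs, strong adaptation (effective infectivity pushed below the recovery rate) keeps the outbreak subcritical, matching the adaptive-threshold intuition.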
Abstract:
We propose to employ bilateral filters to solve the problem of edge detection. The proposed methodology presents an efficient and noise-robust method for detecting edges. Classical bilateral filters smooth images without distorting edges. In this paper, we modify the bilateral filter to perform edge detection, which is the opposite of bilateral smoothing. The Gaussian domain kernel of the bilateral filter is replaced with an edge detection mask, and the Gaussian range kernel is replaced with an inverted Gaussian kernel. The modified range kernel serves to emphasize dissimilar regions. The resulting approach effectively adapts the detection mask according to the pixel intensity differences. The results of the proposed algorithm are compared with those of standard edge detection masks. Comparisons of the bilateral edge detector with the Canny edge detection algorithm, both after non-maximal suppression, are also provided. The results of our technique are observed to be better and more noise-robust than those offered by methods employing masks alone, and are also comparable to the results from the Canny edge detector, outperforming it in certain cases.
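The kernel substitution described above can be sketched directly: a Sobel-style domain mask combined with an inverted Gaussian range kernel, so similar neighbours are suppressed and dissimilar (edge-side) neighbours emphasized. This is a minimal illustration of the construction, not the authors' exact filter; the mask choice, sigma value, and lack of normalization are assumptions.

```python
import numpy as np

def bilateral_edge(img, sigma_r=30.0):
    """Bilateral-style edge response: at each pixel, weight a horizontal
    Sobel mask by an inverted Gaussian of the intensity difference, so the
    response comes only from intensity-dissimilar neighbours."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            diff = patch - img[y, x]
            inv_range = 1.0 - np.exp(-(diff ** 2) / (2.0 * sigma_r ** 2))
            out[y, x] = abs(np.sum(sobel_x * inv_range * patch))
    return out
```

On a vertical step edge the response is nonzero only at the step; in flat regions the inverted range kernel zeroes out every contribution, which is the source of the noise robustness claimed above.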
Abstract:
To combine the advantages of both stability- and optimality-based designs, a single network adaptive critic (SNAC) aided nonlinear dynamic inversion approach is presented in this paper. Here, the gains of a dynamic inversion controller are selected in such a way that the resulting controller behaves very close to a pre-synthesized SNAC controller in the output regulation sense. Because SNAC is based on optimal control theory, it makes the dynamic inversion controller operate nearly optimally. More importantly, it retains the two major benefits of dynamic inversion, namely (i) a closed-form expression for the controller and (ii) easy scalability to command tracking applications without knowing the reference commands a priori. An extended architecture is also presented in this paper that adapts online to system modeling and inversion errors, as well as to reduced control effectiveness, thereby leading to enhanced robustness. The strengths of this hybrid method of applying SNAC to optimize a nonlinear dynamic inversion controller are demonstrated on a benchmark problem in robotics, that is, a two-link robotic manipulator system. Copyright (C) 2013 John Wiley & Sons, Ltd.
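The closed-form character of dynamic inversion mentioned above is easy to see on a scalar example: for xdot = f(x) + g(x)*u, the control u = (v - f(x)) / g(x) with v = -k*(x - x_ref) turns the closed loop into pure error decay. This scalar sketch is for illustration only (the paper treats a two-link manipulator with SNAC-tuned gains); the particular f, g, and k below are assumed.

```python
import math

def dynamic_inversion(x, x_ref, f, g, k=5.0):
    """Closed-form dynamic inversion control for xdot = f(x) + g(x)*u:
    choose u so that the closed loop becomes xdot = -k * (x - x_ref)."""
    v = -k * (x - x_ref)
    return (v - f(x)) / g(x)

def simulate(x0, x_ref, f, g, dt=0.01, steps=500):
    """Euler simulation of the closed loop; with a perfect model the state
    converges to x_ref regardless of the nonlinearities f and g."""
    x = x0
    for _ in range(steps):
        u = dynamic_inversion(x, x_ref, f, g)
        x += (f(x) + g(x) * u) * dt
    return x
```

When the model of f and g is imperfect, the cancellation is inexact and the error dynamics degrade, which is exactly the gap the paper's online-adapting extended architecture is meant to close.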
Abstract:
In the domain of manual mechanical assembly, expert knowledge is an important means of supporting assembly planning and leads to fewer issues during actual assembly. Knowledge-based systems can be used to provide assembly planners with expert knowledge as advice. However, acquisition of knowledge remains a difficult task to automate, while manual acquisition is tedious, time-consuming, and requires the engagement of knowledge engineers with specialist knowledge to understand and translate expert knowledge. This paper describes the development, implementation, and preliminary evaluation of a method that asks an expert a series of questions in order to automatically acquire the necessary diagnostic and remedial knowledge as rules for use in a knowledge-based system that helps assembly planners diagnose and resolve issues. The method, called a questioning procedure, organizes its questions around an assembly situation, which it presents to the expert as the context, and adapts its questions based on the answers it receives from the expert. (C) 2014 Elsevier Ltd. All rights reserved.
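The questioning procedure can be caricatured in a few lines: present a situation as context, shape each follow-up question from the previous answer, and assemble the answers into a diagnostic/remedial rule. The dialogue wording, rule schema, and `ask` callback are invented for illustration; the actual method's question model is far richer.

```python
def acquire_rule(situation, ask):
    """Elicit one diagnostic/remedial rule for `situation`. `ask` abstracts
    the expert dialogue: it takes a question string and returns the answer.
    Each question is adapted to (built from) the preceding answer."""
    issue = ask(f"In the situation '{situation}', what issue can arise?")
    symptom = ask(f"What condition indicates that '{issue}' is occurring?")
    remedy = ask(f"How should a planner resolve '{issue}'?")
    return {"if": symptom, "diagnose": issue, "remedy": remedy}
```

Because the questions embed earlier answers, the expert always responds within a concrete context, which is the key device the abstract attributes to the questioning procedure.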