956 results for sequence of functions


Relevance:

90.00%

Publisher:

Abstract:

A profile is a finite sequence of vertices of a graph. The median of a profile is the set of all vertices of the graph that minimise the sum of the distances to the vertices of the profile. Any subset of the vertex set that is the median of some profile is called a median set. The number of median sets of a graph is defined to be the median number of the graph. In this paper, we identify the median sets of various classes of graphs, such as Kp − e, Kp,q for p > 2, the wheel graph, and so forth. The median numbers of these graphs and of hypercubes are determined, and an upper bound for the median number of even cycles is established. We also express the median number of a product graph in terms of the median numbers of its factors.
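The definitions above can be made concrete by brute force on a small graph. The sketch below is illustrative only (not from the paper; the graph, function names, and the C4 example are mine): it computes the median of a profile by summing breadth-first-search distances.

```python
from collections import deque

def bfs_distances(adj, source):
    # Breadth-first search distances from source in an unweighted graph,
    # given as an adjacency dict {vertex: [neighbours]}.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def median_of_profile(adj, profile):
    # The median of a profile is the set of vertices minimising the sum
    # of distances to the profile's vertices (counted with multiplicity).
    dist = {v: bfs_distances(adj, v) for v in adj}
    cost = {v: sum(dist[v][p] for p in profile) for v in adj}
    best = min(cost.values())
    return {v for v in adj if cost[v] == best}

# The 4-cycle C4 with vertices 0-1-2-3-0:
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(median_of_profile(C4, (0, 2)))  # → {0, 1, 2, 3}
```

For the antipodal profile (0, 2) on C4 every vertex has distance sum 2, so the median is the whole vertex set, which is why even cycles are an interesting case for median numbers.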


An alkaline protease gene (Eap) was isolated for the first time from a marine fungus, Engyodontium album. Eap consists of an open reading frame of 1,161 bp encoding a prepropeptide of 387 amino acids with a calculated molecular mass of 40.923 kDa. Homology comparison of the deduced amino acid sequence of Eap with other known proteins indicated that Eap encodes an extracellular protease belonging to the subtilase family of serine proteases (family S8). A comparative homology model of the Engyodontium album protease (EAP) was developed using the crystal structure of proteinase K. The model revealed that EAP has broad substrate specificity similar to proteinase K, with a preference for bulky hydrophobic residues at P1 and P4. EAP is also suggested to have two disulfide bonds and more than two Ca2+ binding sites in its 3D structure, both of which are assumed to contribute to the thermostable nature of the protein.


A number of genes are involved in the regulation of functional processes in marine bivalves. In the pearl oyster, some of these genes have a major role in the immune/defence function and in the biomineralization process involved in pearl formation. As secondary filter feeders, pearl oysters are exposed to various kinds of stressors, such as bacteria, viruses, pesticides, industrial wastes, toxic metals and petroleum derivatives, making them susceptible to diseases. Environmental changes and ambient stress also affect non-specific immunity, making the organisms vulnerable to infections. These stressors can trigger various cellular responses in the animals in their effort to counteract the ill effects of the stress. These include the expression of defence-related genes encoding factors such as antioxidant enzymes and pattern recognition receptor proteins. One strategy to combat these problems is to gain insight into disease resistance genes and use them for disease control and health management. Similarly, although it is known that pearl formation in molluscs is mediated by specialized proteins, which are in turn regulated by the specific genes encoding them, there is a paucity of information on these genes. In view of the above facts, studies on the defence-related and pearl-forming genes of the pearl oyster assume importance from the point of view of both sustainable fishery management and aquaculture. At present, there is a lack of knowledge on the functional genes and their expression in the Indian pearl oyster Pinctada fucata. Hence, this work was taken up to identify and characterize the defence-related and pearl-forming genes, and to study their expression through molecular means, in the Indian pearl oyster Pinctada fucata, which is economically important for aquaculture on the southeast coast of India.
The present study has successfully carried out the molecular identification, characterization and expression analysis of defence-related antioxidant enzyme genes and pattern recognition protein genes, which play a vital role in the defence against biotic and abiotic stressors. The antioxidant enzyme genes Cu/Zn superoxide dismutase (Cu/Zn SOD), glutathione peroxidase (GPX) and glutathione-S-transferase (GST) were studied. Concerted approaches using molecular tools such as the polymerase chain reaction (PCR), rapid amplification of cDNA ends (RACE), molecular cloning and sequencing resulted in the identification and characterization of the full-length sequence (924 bp) of Cu/Zn SOD, the most important antioxidant enzyme gene. A BLAST search in NCBI confirmed the identity of the gene as Cu/Zn SOD. Characteristic amino acid sequences, such as copper/zinc-binding residues, family signature sequences and signal peptides, were identified. Multiple sequence alignment and phylogenetic analysis of the nucleotide and amino acid sequences using bioinformatics tools such as BioEdit and MEGA revealed that the sequences contain regions of diversity as well as homogeneity. A close evolutionary relationship between P. fucata and other aquatic invertebrates was revealed by the phylogenetic tree constructed using the SOD amino acid sequences of P. fucata and of other invertebrates as well as vertebrates.


The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data requires automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheids, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modelling and classification. Modelling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) of Zechmeister (2009) and the Significant Spectrum (SigSpec) method of Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem in the case of huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
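The phased light curve construction described above reduces to a simple operation: for each observation time, take the fractional part of the elapsed time measured in trial periods. The sketch below is my own illustration, not the thesis's code; the sample times are invented.

```python
def fold(times, period, epoch=0.0):
    # Phase of each observation for a trial period:
    # the fractional part of (t - epoch) / period, in [0, 1).
    return [((t - epoch) / period) % 1.0 for t in times]

# Observations of a strictly periodic signal fold onto a single phase:
times = [0.0, 2.5, 5.0, 7.5]
print(fold(times, 2.5))   # → [0.0, 0.0, 0.0, 0.0]

# A wrong trial period scatters the phases instead:
print(fold(times, 2.0))   # → [0.0, 0.25, 0.5, 0.75]
```

Period search methods such as PDM exploit exactly this contrast: the true period minimises the dispersion of magnitudes within phase bins, while wrong periods smear the light curve across all phases.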


The motion of a viscous incompressible fluid in bounded domains with a smooth boundary can be described by the nonlinear Navier-Stokes equations. This description corresponds to the so-called Eulerian approach. We develop a new approximation method for the Navier-Stokes equations, in both the stationary and the non-stationary case, by a suitable coupling of the Eulerian and the Lagrangian representation of the flow, where the latter is defined by the trajectories of the particles of the fluid. The method leads to a sequence of uniquely determined approximate solutions with a high degree of regularity, containing a convergent subsequence with limit function v such that v is a weak solution of the Navier-Stokes equations.
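For reference, the Eulerian description mentioned above is the standard incompressible Navier-Stokes system (standard notation, not reproduced from the thesis), with velocity v, pressure p, viscosity ν and external force f on a domain Ω:

```latex
\begin{aligned}
\partial_t v - \nu \Delta v + (v \cdot \nabla) v + \nabla p &= f
  && \text{in } \Omega, \\
\operatorname{div} v &= 0 && \text{in } \Omega, \\
v &= 0 && \text{on } \partial\Omega,
\end{aligned}
```

where the time derivative term drops in the stationary case. The Lagrangian representation instead tracks particle trajectories X(t, x) with dX/dt = v(t, X), which is the coupling the abstract refers to.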


SUMMARY: Protein kinases perform central tasks in the signal transduction of higher cells. Among them, the cAMP-dependent protein kinase (PKA) is one of the best-characterized protein kinases with respect to its structure and function. Nevertheless, little is known about direct interaction partners of its catalytic subunits (PKA-C). Potential interaction partners of PKA-C were identified in a split-ubiquitin-based yeast two-hybrid (Y2H) system. Both the human principal isoform Cα (hCα) and protein kinase X (PrKX) were used as baits. After confirming the functionality of the PKA-C bait proteins, demonstrating their expression, and verifying their interaction with the known interaction partner PKI, a Y2H screen was performed against a mouse embryo cDNA expression library. From 2*10^6 clones, 76 colonies were isolated that expressed a prey protein interacting with PrKX. Sequencing of the contained prey vectors identified 25 different potential interaction partners. For hCα, more than 2*10^6 S. cerevisiae colonies were examined, of which 1,959 were positive (1,663 under increased stringency). By sequencing about 10% of the clones (168), sequences for 67 different potential interaction partners of hCα were identified. 15 of the prey proteins were identified in both screens. The PKA-C-specific interaction of the 77 prey proteins in total was examined in a bait dependency test against largeT, a protein unrelated to the PKA system. From the PKA-C-specific binders, the soluble prey proteins AMY-1, Bax72-192, Fabp3, Gng11, MiF, Nm23-M1, Nm23-M2, Sssca1 and VASP256-375 were selected for further in vitro validation. The interaction of FLAG-Strep-Strep-hCα (FSS-hCα) with the One-STrEP-HA proteins (SSHA proteins), purified via Strep-Tactin after recombinant expression in E. coli, was confirmed by co-immunoprecipitation for SSHA-Fabp3, -Nm23-M1, -Nm23-M2, -Sssca1 and -VASP256-375. In SPR studies, for which hCα was covalently coupled to the surface of a CM5 sensor chip, the ATP/Mg2+ dependence of the binding as well as differential effects of the ATP-competitive inhibitors H89 and HA-1077 were examined. Free hCα added to the SSHA proteins before injection competed for binding to the hCα surface, in contrast to FSS-PrKX. Initial kinetic analyses yielded equilibrium dissociation constants in the µM (SSHA-Fabp3, -Sssca1), nM (SSHA-Nm23-M1, -M2) and pM (SSHA-VASP256-375) ranges, respectively. In functional analyses, phosphorylation of SSHA-Sssca1 and VASP256-375 by hCα and FSS-PrKX was demonstrated by autoradiography. SSHA-VASP256-375 also showed strong inhibition of hCα in a mobility shift assay. However, this inhibitory effect and the high affinity could be attributed to a combination of the vector's linker sequence and the N-terminus of VASP256-375. The interactions of the interaction partners Fabp3, Nm23-M1 and Nm23-M2 identified here with hCα may reveal new PKA functions in follow-up studies, particularly in the heart and during cell migration. Sssca1, by contrast, represents a new PKA substrate to be characterized in more detail.


This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots, e.g. by minimizing operation time, while ensuring human safety, e.g. by collision avoidance. To solve such problems, an online motion planning approach for robotic manipulators with HRI has been proposed. The planning strategies for the robotic manipulators considered in the thesis are performed directly in the workspace for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP). It is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by the human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions based on probability density functions in the form of single points, such as most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as a region which is possibly occupied by the human at future instants of time. The method is further enhanced by combining it with reachability analysis to increase the prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. In this way one arrives at motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator.
The task of the industrial robot is to drive its end-effector through a nominal sequence of gripping, motion and releasing operations while no collision with a human arm occurs.
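A common way to cast non-convex obstacle avoidance as a mixed-integer program of the kind mentioned above is the big-M formulation (a generic textbook sketch, not necessarily the thesis's exact constraints): to keep a planned position (x, y) out of an axis-aligned box [x⁻, x⁺] × [y⁻, y⁺], introduce binary variables b₁, …, b₄ and require

```latex
\begin{aligned}
x &\le x^{-} + M b_1, &\qquad x &\ge x^{+} - M b_2, \\
y &\le y^{-} + M b_3, &\qquad y &\ge y^{+} - M b_4, \\
& \sum_{i=1}^{4} b_i \le 3,
\end{aligned}
```

where M is a sufficiently large constant. The last constraint forces at least one bᵢ to zero, so at least one separating half-space constraint is active and the position lies outside the box. The number of such binaries grows quickly with obstacles and time steps, which is why reformulations that reduce the binary count, as described in the abstract, matter for online planning.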


This research investigates what information German Fairtrade coffee consumers search for during pre-purchase information seeking and to what extent information is retrieved. Furthermore, the sequence of the information search as well as the degree of cognitive involvement is highlighted. The role of labeling, the importance of additional ethical information and its quality in terms of concreteness as well as the importance of product price and organic origin are addressed. A set of information relevant to Fairtrade consumers was tested by means of the Information Display Matrix (IDM) method with 389 Fairtrade consumers. Results show that prior to purchase, information on product packages plays an important role and is retrieved rather extensively, but search strategies that reduce the information processing effort are applied as well. Furthermore, general information is preferred over specific information. Results of two regression analyses indicate that purchase decisions are related to search behavior variables rather than to socio-demographic variables and purchase motives. In order to match product information with consumers’ needs, marketers should offer information that is reduced to the central aspects of Fairtrade.


The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: deciding whether a given sound is a possible sound of a given language; disambiguating a sequence of words; and computing the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternative linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.


The control of aerial gymnastic maneuvers is challenging because these maneuvers frequently involve complex rotational motion and because the performer has limited control of the maneuver during flight. A performer can influence a maneuver using a sequence of limb movements during flight. However, the same sequence may not produce reliable performances in the presence of off-nominal conditions. How do people compensate for variations in performance to reliably produce aerial maneuvers? In this report I explore the role that passive dynamic stability may play in making the performance of aerial maneuvers simple and reliable. I present a control strategy comprised of active and passive components for performing robot front somersaults in the laboratory. I show that passive dynamics can neutrally stabilize the layout somersault which involves an "inherently unstable" rotation about the intermediate principal axis. And I show that a strategy that uses open loop joint torques plus passive dynamics leads to more reliable 1 1/2 twisting front somersaults in simulation than a strategy that uses prescribed limb motion. Results are presented from laboratory experiments on gymnastic robots, from dynamic simulation of humans and robots, and from linear stability analyses of these systems.


This work describes the nature and acquisition sequence of partial (wh-) interrogative questions in Catalan- and/or Spanish-speaking children, within an analytical framework according to which the acquisition of linguistic structures is built gradually, from concrete structures to more abstract ones. The sample comprises 10 children from longitudinal corpora, aged from 17 months to 3 years. The analysis considered the syntactic structure of the sentence, errors, interrogative pronouns and adverbs, and verb typology. The results show that the acquisition sequence passes through an initial stage characterized by stereotyped productions or formulas, during which only a few interrogative particles appear in very specific structures. Later, interrogation appears with other pronouns and adverbs and diversifies to other verbs; moreover, no errors are observed in syntactic construction. These results mark a difference with respect to previous studies of English.


Exam questions and solutions in LaTeX. Diagrams for the questions are all together in the support.zip file, as .eps files.


Exam questions and solutions in PDF


Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of the information shared should be assessed to determine the benefits and issues, with credibility and reliability as known concerns. Mapping tweets to the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline. This illustrates a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted each month with the timeline suggests that users tweet more as an event heightens and persists. Furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just in the response phase, potentially improving future policies and activities.
Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulties of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS, and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses but convey similar information. The analysis also showed that the low similarity in the clauses was largely due to differences in semantics, elaboration and functionality between SNS. Therefore, this paper proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.


The anxiolytic properties of ethanol (1 g/kg, 15% dose, i.p.) were studied in two experiments with rats involving incentive downshifts from a 32% to a 4% sucrose solution. In Experiment 1, alcohol administration before a downshift from 32% to 4% sucrose prevented the development of consummatory suppression (consummatory successive negative contrast, cSNC). In Experiment 2, ethanol prevented the attenuating effects of partial reinforcement (random sequence of 32% sucrose and nothing) on cSNC, causing a retardation of recovery from contrast. These effects of ethanol on cSNC are analogous to those described for the benzodiazepine anxiolytic chlordiazepoxide, suggesting that at least some of its anxiolytic effects are mediated by the same mechanisms.