910 results for Cosine and Sine Trigonometric Functions
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6, but it increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of the logarithmic critical areas of the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local, spatially windowed matched filter. The model was extended to include complex stimuli and was found to describe the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2. The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers switch from a cross-correlation (i.e., matched-filter) strategy to an auto-correlation detection strategy when uncertainty is introduced into the task. The model described the data accurately.
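The stated relation between the critical area of a grating sum and those of its components can be written compactly; the notation below (with $A$ the critical area of the sum, $A_i$ those of the component gratings, and $c_i$ their Michelson contrasts) is an assumed rendering for illustration, not necessarily the thesis's own:

$$\log A \;=\; \sum_i \frac{c_i}{\sum_j c_j}\,\log A_i$$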
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common cases in which two or even all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine obtains performance measures from the fitted functions. An R package for Windows and the source code of the MATLAB and R routines are available as Supplementary Files.
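As an illustration of the underlying model (not the authors' MATLAB/R routines, whose interface is not reproduced here), a minimal Monte Carlo sketch of an independent-channels model with exponentially distributed arrival latencies and a trichotomous decision rule follows; all parameter names and values are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sj3_probabilities(soa, lam1=1/30, lam2=1/30, tau1=20, tau2=20,
                      delta=70, n=100_000):
    """Monte Carlo estimate of ternary SJ3 response probabilities.

    Arrival latencies are shifted exponentials (rates lam1/lam2 in 1/ms,
    shifts tau1/tau2 in ms); responses follow a trichotomous decision
    rule with resolution delta: |difference| <= delta -> 'simultaneous'.
    All parameter values here are illustrative assumptions.
    """
    t1 = tau1 + rng.exponential(1 / lam1, n)         # channel for stimulus 1
    t2 = soa + tau2 + rng.exponential(1 / lam2, n)   # channel for stimulus 2
    d = t2 - t1                                      # arrival-time difference
    p_first = np.mean(d > delta)         # "stimulus 1 perceived first"
    p_sim = np.mean(np.abs(d) <= delta)  # "simultaneous"
    p_second = np.mean(d < -delta)       # "stimulus 2 perceived first"
    return p_first, p_sim, p_second

for soa in (-100, 0, 100):  # stimulus-onset asynchrony in ms
    print(soa, sj3_probabilities(soa))
```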
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
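For reference, the connected and disconnected correlation functions referred to above are conventionally defined as follows, with $\langle\cdot\rangle$ a thermal average and the overline an average over random-field realizations (at zero temperature the thermal average reduces to evaluation in the ground state); this is the standard convention, not necessarily the paper's exact notation:

$$G_{\mathrm{conn}}(x-y)=\overline{\langle s_x s_y\rangle-\langle s_x\rangle\langle s_y\rangle},\qquad G_{\mathrm{disc}}(x-y)=\overline{\langle s_x\rangle\langle s_y\rangle}$$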
Abstract:
The transverse momentum dependent parton distribution/fragmentation functions (TMDs) are essential in the factorization of a number of processes like Drell-Yan scattering, vector boson production, semi-inclusive deep inelastic scattering, etc. We provide a comprehensive study of unpolarized TMDs at next-to-next-to-leading order, which includes an explicit calculation of these TMDs and an extraction of their matching coefficients onto their integrated analogues, for all flavor combinations. The obtained matching coefficients are important for any kind of phenomenology involving TMDs. In the present study each individual TMD is calculated without any reference to a specific process. We recover the known results for parton distribution functions and provide new results for the fragmentation functions. The results for the gluon transverse momentum dependent fragmentation functions are presented for the first time at one and two loops. We also discuss the structure of singularities of TMD operators and TMD matrix elements, crossing relations between TMD parton distribution functions and TMD fragmentation functions, and renormalization group equations. In addition, we consider the behavior of the matching coefficients at threshold and make a conjecture on their structure to all orders in perturbation theory.
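The matching of a TMD onto its integrated analogue mentioned above takes, schematically, the standard small-$b$ operator product expansion form, where $\otimes$ denotes a Mellin convolution in the momentum fraction; the notation below is assumed for illustration:

$$F_{f\leftarrow h}(x,b;\mu,\zeta)=\sum_{f'}\bigl[C_{f\leftarrow f'}(b;\mu,\zeta)\otimes f_{f'\leftarrow h}(\mu)\bigr](x)+\mathcal{O}\bigl(b^2\Lambda^2\bigr)$$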
Abstract:
Actinin and spectrin proteins are members of the spectrin family of actin-crosslinking proteins. The importance of these proteins in the cytoskeleton is demonstrated by the fact that they are common targets for disease-causing mutations. In their most prominent roles, actinin and spectrin are responsible for stabilising and maintaining the muscle architecture during contraction, and for providing shape and elasticity to the red blood cell in circulation, respectively. To carry out such roles, actinin and spectrin must possess important mechanical and physical properties. These attributes are desirable when choosing a building block for protein-based nanoconstruction. In this study, I assess the contribution of several disease-associated mutations in the actinin-1 actin-binding domain that have recently been linked to a rare platelet disorder, congenital macrothrombocytopenia. I investigate the suitability of both actinin and spectrin proteins as potential building blocks for nanoscale structures, and I evaluate a fusion-based assembly strategy to bring about self-assembly of protein nanostructures. I report that the actinin-1 mutant proteins display increased actin binding compared to wild-type actinin-1 proteins. I find that both actinin and spectrin proteins exhibit enormous potential as nano-building blocks in terms of their stability and ability to self-assemble, and I successfully design and create homodimeric and heterodimeric bivalent building blocks using the fusion-based assembly strategy. Overall, this study has gathered helpful information that will further our understanding of actinin and spectrin, both in terms of their natural functions and of their potential unnatural functions in protein nanotechnology.
Abstract:
A new semi-implicit stress integration algorithm for finite strain plasticity (compatible with hyperelasticity) is introduced. Its most distinctive feature is the use of different parameterizations of the equilibrium and reference configurations. Rotation terms (nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the reference configuration. In contrast, relative Green–Lagrange strains (which are quadratic in terms of displacements) represent the equilibrium configuration implicitly. In addition, the adequacy of several objective stress rates in the semi-implicit context is studied. We parametrize both reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms, which use coinciding configurations. A single constitutive framework provides the quantities needed by common discretization schemes. This is computationally convenient and robust, as all elements only need to provide pre-established quantities irrespective of the constitutive model. In this work, mixed strain/stress control is used, as well as our smoothing algorithm for the complementarity condition. Exceptional time-step robustness is achieved in elasto-plastic problems: often fewer than one-tenth of the typical number of time increments can be used with a quantifiable effect on accuracy. The proposed algorithm is general: all hyperelastic models and all classical elasto-plastic models can be employed. Plane-stress, shell, and 3D examples are used to illustrate the new algorithm. Both isotropic and anisotropic behavior are presented in elasto-plastic and hyperelastic examples.
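For context, the Green–Lagrange strain is quadratic in displacements in the standard sense; in assumed notation, with $\mathbf{F}$ the deformation gradient and $\mathbf{u}$ the displacement field:

$$\mathbf{E}=\tfrac{1}{2}\bigl(\mathbf{F}^{\mathsf{T}}\mathbf{F}-\mathbf{I}\bigr)=\tfrac{1}{2}\bigl(\nabla\mathbf{u}+\nabla\mathbf{u}^{\mathsf{T}}+\nabla\mathbf{u}^{\mathsf{T}}\nabla\mathbf{u}\bigr)$$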
Abstract:
This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah [1989. The dynamic effects of aggregate demand and supply disturbances. The American Economic Review 79, 655–673], and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto [2001. Estimating shocks and impulse response functions. Journal of Applied Econometrics 16, 371–387], Shapiro and Watson [1988. Sources of business cycle fluctuations. NBER Macroeconomics Annual 3, 111–148], King et al. [1991. Stochastic trends and economic fluctuations. American Economic Review 81, 819–840], Gali [1992. How well does the IS-LM model fit postwar US data? Quarterly Journal of Economics 107, 709–735; 1999. Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? American Economic Review 89, 249–271] and Fisher [2006. The dynamic effects of neutral and investment-specific technology shocks. Journal of Political Economy 114, 413–451].
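The central claim can be made concrete in a standard vector error-correction form (notation assumed for illustration). In

$$\Delta y_t=\alpha\beta' y_{t-1}+\sum_{i=1}^{p-1}\Gamma_i\,\Delta y_{t-i}+B\varepsilon_t,$$

a structural equation driven by a known permanent shock carries a zero row in the loading matrix $\alpha$, so the stationary error-correction terms $\beta' y_{t-1}$ drop out of that equation and remain available as instruments for estimating its parameters.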
Abstract:
This paper examines three functions of music technology in the study of music: first, as a tool; second, as an instrument; and lastly, as a medium for thinking. As our societies become increasingly embroiled in digital media for representation and communication, our philosophies of music education need to adapt to integrate these developments while maintaining the essence of music. The foundation of music technology in the 1990s is the digital representation of sound. It is this fundamental shift to a new medium with which to represent sound that carries with it the challenge to address digital technology and its multiple effects on music creation and presentation. In this paper I suggest that music institutions should take a broad and integrated approach to the place of music technology in their courses, based on an understanding of the digital representation of sound and the three functions it can serve. Educators should reconsider digital technologies such as synthesizers and computers as music instruments and cognitive amplifiers, not simply as efficient tools.
Abstract:
Engineering asset management (EAM) is a broad discipline, and EAM functions and processes are characterized by their distributed nature. However, engineering asset maintenance nowadays mostly relies on self-maintained experiential rule bases and periodic maintenance, which lacks a collaborative engineering approach. This research proposes a collaborative environment integrated by a service center with domain expertise in areas such as diagnosis, prognosis, and asset operations. The collaborative maintenance chain combines asset operation sites, the service center (i.e., the maintenance operation coordinator), the system provider, first-tier collaborators, and maintenance part suppliers. Meanwhile, to automate communication and negotiation among organizations, the multiagent system (MAS) technique is applied to enhance the entire service level. During the MAS design process, this research combines the Prometheus MAS modeling approach with Petri-net modeling methodology and the unified modeling language to visualize and rationalize the design process of the MAS. The major contributions of this research are the development of a Petri-net-enabled Prometheus MAS modeling methodology and the construction of a collaborative agent-based maintenance chain framework for integrated EAM.
Abstract:
Using six lattice types (4×4, 5×5, and 6×6 square lattices; a 3×3×3 cubic lattice; and 2+3+4+3+2 and 4+5+6+5+4 triangular lattices), three alphabet sizes (HP, HNUP, and 20 letters), and two energy functions, the designability of protein structures is calculated based on random sampling of structures and common biased sampling (CBS) of protein sequence space. Three quantities defined to elucidate the designability, namely the stability (average energy gap), foldability, and partnum of a structure, are then calculated. The authors find that, whatever the lattice type, alphabet size, and energy function used, highly designable (preferred) structures emerge. For all cases considered, local interactions reduce degeneracy and increase the designability. The designability is sensitive to the lattice type, alphabet size, energy function, and sampling method of the sequence space. Compared with the random sampling method, both the CBS and the Metropolis Monte Carlo sampling methods yield a higher designability. The correlation coefficients between the designability, stability, and foldability are mostly larger than 0.5, demonstrating a strong correlation among them. The correlation between the designability and the partnum is weaker, however, because the partnum is independent of the energy. The results are useful for practical applications of the designability principle, such as predicting protein tertiary structure.
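Lattice-model energy functions of the kind used in such studies typically take the standard contact form below, where the sum runs over non-bonded nearest-neighbour lattice contacts; this is an illustrative convention, not necessarily the authors' exact parameterization:

$$E(\mathbf{s},\mathbf{r})=\sum_{i<j}\epsilon(s_i,s_j)\,\Delta(\mathbf{r}_i,\mathbf{r}_j),\qquad \Delta=1\ \text{if residues } i,j\ \text{are non-bonded neighbours, else } 0$$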
Abstract:
Female sex hormones are known to regulate the adaptive and innate immune functions of the female reproductive tract. This review aims to update our current knowledge of the effects of the sex hormones estradiol and progesterone in the female reproductive tract on innate immunity, antigen presentation, specific immune responses, antibody secretion, genital tract infections caused by Chlamydia trachomatis, and vaccine-induced immunity.
Abstract:
Background: The majority of peptide bonds in proteins occur in the trans conformation. However, for proline residues, a considerable fraction of prolyl peptide bonds adopt the cis form. Proline cis/trans isomerization is known to play a critical role in protein folding, splicing, cell signaling and transmembrane active transport. Accurate prediction of proline cis/trans isomerization in proteins would have many important applications towards the understanding of protein structure and function.
Results: In this paper, we propose a new approach to predict proline cis/trans isomerization in proteins using a support vector machine (SVM). Preliminary results indicated that Radial Basis Function (RBF) kernels led to better prediction performance than polynomial and linear kernel functions. We used single-sequence information with different local window sizes, amino acid compositions of different local sequences, multiple sequence alignments obtained from PSI-BLAST, and the secondary structure information predicted by PSIPRED. We explored these different sequence encoding schemes to investigate their effects on prediction performance. Training and testing were performed on a newly enlarged dataset of 2424 non-homologous proteins determined by the X-ray diffraction method, using 5-fold cross-validation. A window size of 11 provided the best performance for determining proline cis/trans isomerization based on the single amino acid sequence. Using multiple sequence alignments in the form of PSI-BLAST profiles significantly improved prediction performance: prediction accuracy increased from 62.8% with the single sequence to 69.8%, and the Matthews correlation coefficient (MCC) improved from 0.26 to 0.40. Furthermore, when coupled with the secondary structure predicted by PSIPRED, our method yielded a prediction accuracy of 71.5% and an MCC of 0.43, respectively 9% and 0.17 higher than the values achieved with single-sequence information alone.
Conclusion: A new method has been developed to predict proline cis/trans isomerization in proteins based on a support vector machine, using the single amino acid sequence with different local window sizes, the amino acid compositions of local sequences flanking the central proline residue, the position-specific scoring matrices (PSSMs) extracted by PSI-BLAST, and the predicted secondary structures generated by PSIPRED. The successful application of the SVM approach in this study reinforces that SVM is a powerful tool for predicting proline cis/trans isomerization in proteins and for biological sequence analysis in general.
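A minimal sketch of the kind of pipeline the abstract describes: a windowed one-hot sequence encoding fed to an RBF-kernel SVM. The window size matches the abstract's best value, but the encoding details and the training data here are illustrative assumptions, not the authors' exact feature sets:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_window(seq, center, window=11):
    """One-hot encode the residues in a window centered on a proline.

    Positions falling outside the sequence are left as all-zero rows.
    """
    half = window // 2
    x = np.zeros((window, len(AMINO_ACIDS)))
    for k, pos in enumerate(range(center - half, center + half + 1)):
        if 0 <= pos < len(seq) and seq[pos] in AA_INDEX:
            x[k, AA_INDEX[seq[pos]]] = 1.0
    return x.ravel()

# Hypothetical training data: (sequence, proline index, label) triples,
# label 1 = cis, 0 = trans.  Real data would come from PDB-derived sets.
examples = [("MKTAYPVLQG", 5, 1), ("GGSAPPLKDE", 4, 0),
            ("LLNVPQRSTA", 4, 0), ("AACGPYWFDE", 4, 1)] * 10

X = np.array([encode_window(s, i) for s, i, _ in examples])
y = np.array([lbl for _, _, lbl in examples])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF kernel, as in the study
print(cross_val_score(clf, X, y, cv=5).mean())
```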
Abstract:
It was reported that the manuscript of Crash was returned to the publisher with a note reading ‘The author is beyond psychiatric help’. Ballard took the lay diagnosis as proof of complete artistic success. Crash conflates the Freudian tropes of libido and thanatos, overlaying these onto the twentieth century erotic icon, the car. Beyond mere incompetent adolescent copulatory fumblings in the back seat of the parental sedan or the clichéd phallic locomotor of the mid-life Ferrari, Ballard engages the full potentialities of the automobile as the locus and sine qua non of a perverse, though functional erotic. ‘Autoeroticism’ is transformed into automotive, traumatic or surgical paraphilia, driving Helmut Newton’s insipid photo-essays of BDSM and orthopædics into an entirely new dimension, dancing precisely where (but more crucially, because) the ‘body is bruised to pleasure soul’. The serendipity of quotidian accidental collisions is supplanted, in pursuit of the fetishised object, by contrived (though not simulated) recreations of iconographic celebrity deaths. Penetration remains as a guiding trope of sexuality, but it is confounded by a perversity of focus. Such an obsessive pursuit of this autoerotic-as-reality necessitates the rejection of the law of human sexual regulation, requiring the re-interpretation of what constitutes sex itself by looking beyond or through conventional sexuality into Ballard’s paraphiliac and nightmarish consensual Other. This Other allows for (if not demands) the tangled wreckage of a sportscar to function as a transformative sexual agent, creating, of woman, a being of ‘free and perverse sexuality, releasing within its dying chromium and leaking engine-parts, all the deviant possibilities of her sex’.
Abstract:
Handling information overload online is, from the user's point of view, a big challenge, especially when the number of websites is growing rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information that may be liked by a user. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods based on vector or matrix models to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases, similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used to find the similarities. Web logs are high-dimensional datasets consisting of multiple users and multiple searches, with many attributes attached to each. Two-dimensional data analysis methods may often overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising the decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations. A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor-decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) derived from the searches made by the users in the cluster. An object profile is created to represent similar objects clustered on the basis of their feature similarity. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilises the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of the recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
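A minimal sketch of the tensor-profile idea under stated assumptions: a users × items × sessions log tensor, truncated HOSVD computed from SVDs of the mode unfoldings, and cosine similarity on the resulting user factors (the basis of group profiles). The dimensions, ranks and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.poisson(1.0, size=(8, 6, 4)).astype(float)  # users x items x sessions

def mode_unfold(tensor, mode):
    """Unfold a tensor into a matrix along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_factors(tensor, ranks):
    """Leading left singular vectors of each mode unfolding (truncated HOSVD)."""
    return [np.linalg.svd(mode_unfold(tensor, m), full_matrices=False)[0][:, :r]
            for m, r in enumerate(ranks)]

U_users, U_items, U_sessions = hosvd_factors(T, ranks=(3, 3, 2))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Inter-user similarity in the decomposed (latent) space.
sim = np.array([[cosine(U_users[i], U_users[j]) for j in range(8)]
                for i in range(8)])
print(np.round(sim, 2))
```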
Abstract:
Problem-solving courts appear to achieve outcomes which are not common in mainstream courts. There are increasing calls for the adoption of more "therapeutic" and "problem-solving" practices by mainstream judges in civil and criminal courts in a number of jurisdictions, most notably the United States and Australia. Currently, a judge who sets out to exercise a significant therapeutic function is quite likely to be doing so in a specialist court or jurisdiction, outside the mainstream court system and, arguably, outside the adversarial paradigm itself. To some extent, such a judge's work is tolerated but marginalized. But do therapeutic and problem-solving functions have the potential to define, rather than complement, the role of judicial officers? The basic question addressed in this paper is, therefore, whether the judicial role could evolve to be not just less adversarial, but fundamentally non-adversarial. In other words, could we see, or are we seeing, a paradigm shift not just in the colloquial, casual sense of the word, but in the strong, worldview-changing sense meant by Thomas Kuhn?