984 results for Prescribed mean-curvature problem
Abstract:
For executing the activities of a project, one or several resources are required, and these are in general scarce. Many resource-allocation methods assume that the usage of these resources by an activity is constant during execution; in practice, however, the project manager may vary the resource usage of individual activities over time within prescribed bounds. This variation gives rise to a project scheduling problem that consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated to each activity equals its prescribed work content, and precedence and various work-content-related constraints are met.
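A minimal continuous-time sketch may make the constraints concrete. All symbols below (start and completion times S_i, C_i, usage profile r_i(t), work content w_i, usage bounds, capacity R) are introduced here for illustration and are not the paper's notation:

```latex
\begin{align*}
\min\quad & C_{\max} \\
\text{s.t.}\quad
& \int_{S_i}^{C_i} r_i(t)\,dt = w_i
  && \text{(allocated units equal the work content)} \\
& \underline{r}_i \le r_i(t) \le \overline{r}_i, \quad t \in [S_i, C_i]
  && \text{(prescribed usage bounds)} \\
& C_i \le S_j \quad \text{for each precedence } i \to j \\
& \textstyle\sum_i r_i(t) \le R \quad \forall t
  && \text{(scarce resource)} \\
& C_i \le C_{\max} \quad \forall i
\end{align*}
```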
Abstract:
The objective of this paper is to design a path-following control system for a car-like mobile robot using classical linear control techniques, so that it adapts on-line to varying conditions during the trajectory-following task. The main advantage of the proposed control structure is that well-known linear control theory can be applied to calculate PID controllers that fulfil the control requirements, while at the same time the structure is flexible enough to be applied under the non-linear, changing conditions of the path-following task. For this purpose, the Frenet-frame kinematic model of the robot is linearised at a varying working point that is calculated as a function of the actual velocity, the path curvature, and the kinematic parameters of the robot, yielding a transfer function that varies along the trajectory. The proposed controller is formed by a combination of an adaptive PID and a feed-forward controller, which varies according to the working conditions and compensates for the non-linearity of the system. The good features and flexibility of the proposed control structure have been demonstrated through realistic simulations that include both the kinematics and dynamics of the car-like robot.
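As a rough illustration of the structure described, gain scheduling around the working point plus a curvature feed-forward term, here is a minimal Python sketch. The scheduling law, gain values, and the kinematic-bicycle feed-forward delta = atan(L * kappa) are placeholder assumptions for illustration, not the paper's tuning:

```python
import numpy as np

class AdaptivePIDWithFeedforward:
    """Gain-scheduled PID plus curvature feed-forward for lateral-error
    control. Scheduling law and gains are illustrative placeholders."""

    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def schedule_gains(self, v):
        # Hypothetical working-point-dependent retuning: loosely, the
        # linearised plant gain grows with speed, so shrink kp with v.
        kp = 2.0 / max(v, 0.1)
        return kp, 0.1 * kp, 0.5 * kp

    def feedforward(self, kappa, wheelbase):
        # Steering angle that holds path curvature kappa for a kinematic
        # bicycle model: delta_ff = atan(L * kappa).
        return np.arctan(wheelbase * kappa)

    def step(self, lateral_error, v, kappa, wheelbase=2.5):
        kp, ki, kd = self.schedule_gains(v)
        self.integral += lateral_error * self.dt
        derivative = (lateral_error - self.prev_error) / self.dt
        self.prev_error = lateral_error
        feedback = kp * lateral_error + ki * self.integral + kd * derivative
        return feedback + self.feedforward(kappa, wheelbase)

# Example step: 0.2 m lateral error at 5 m/s on a curve of radius 50 m.
controller = AdaptivePIDWithFeedforward(dt=0.02)
steering = controller.step(lateral_error=0.2, v=5.0, kappa=1 / 50)
```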
Abstract:
The problem of concrete swelling in double-curvature dams is described. Several chemical reactions are able to produce swelling of concrete for decades after its initial curing, a problem that affects a considerable number of concrete dams around the world. The object of the work reported is to simulate the underlying mechanisms with sufficient accuracy to reproduce the past history and to predict the future evolution reliably. Having studied the available formulations, the one considered most promising was adopted and introduced via user routines into a commercial finite element code. It is a non-isotropic swelling model, compatible with the cracking and other non-linearities displayed by the concrete. The paper concentrates on the work conducted for a double-curvature arch dam. The model parameters were determined on the basis of parts of the dam's monitored history; reliability was then verified using other parts and, finally, predictions were made about the future evolution of the dam and its safety margin.
Abstract:
In this work, we show how number-theoretical problems can be fruitfully approached with the tools of statistical physics. We focus on g-Sidon sets, which describe sequences of integers whose pairwise sums are different, and propose a random decision problem that addresses the probability that a random set of k integers is g-Sidon. First, we provide numerical evidence showing that there is a crossover between satisfiable and unsatisfiable phases, which becomes an abrupt phase transition in a properly defined thermodynamic limit. Assuming independence, we then develop a mean-field theory for the g-Sidon decision problem. We further improve this mean-field theory, which is only qualitatively correct, by incorporating deviations from independence, yielding results in good quantitative agreement with the numerics both for finite systems and in the thermodynamic limit. Connections between the generalized birthday problem in probability theory, the number theory of Sidon sets and the properties of q-Potts models in condensed matter physics are briefly discussed.
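Under the common convention that a g-Sidon (B2[g]) set is one in which every pairwise sum value occurs at most g times, the random decision problem can be sampled directly. A minimal Monte Carlo sketch (parameter names are ours, not the paper's):

```python
import random
from collections import Counter
from itertools import combinations_with_replacement

def is_g_sidon(s, g):
    # Count each pairwise sum a + b with a <= b; the set is g-Sidon if no
    # sum value occurs more than g times.
    sums = Counter(a + b for a, b in combinations_with_replacement(sorted(s), 2))
    return all(c <= g for c in sums.values())

def p_g_sidon(k, n, g, trials=10_000):
    # Monte Carlo estimate of the probability that a uniformly random
    # k-subset of {1, ..., n} is g-Sidon.
    hits = sum(is_g_sidon(random.sample(range(1, n + 1), k), g)
               for _ in range(trials))
    return hits / trials

print(p_g_sidon(k=10, n=100, g=1))  # g = 1 recovers ordinary Sidon sets
```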
Abstract:
In this article, an approximate analytical solution for the two-body problem perturbed by a radial, low acceleration is obtained, using a regularized formulation of the orbital motion and the method of multiple scales. The results reveal that the physics of the problem evolve on two fundamental scales of the true anomaly. The first drives the oscillations of the orbital parameters along each orbit. The second is responsible for the long-term variations in the amplitude and mean values of these oscillations. Good agreement is found with high-precision numerical solutions.
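A minimal sketch of the setup, written in generic notation rather than the paper's regularized variables, with epsilon the small ratio of the radial thrust to the local gravitational acceleration:

```latex
% Two-body problem perturbed by a constant radial low acceleration:
\ddot{\mathbf r} = -\frac{\mu}{r^{3}}\,\mathbf r + \varepsilon\,\hat{\mathbf r},
\qquad \varepsilon \ll 1 .
% Method of multiple scales in the true anomaly \nu: a fast scale for the
% per-orbit oscillations and a slow scale for their long-term drift,
\theta_1 = \nu, \qquad \theta_2 = \varepsilon\,\nu, \qquad
x(\nu) = x_0(\theta_1,\theta_2) + \varepsilon\,x_1(\theta_1,\theta_2)
         + O(\varepsilon^{2}).
```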
Abstract:
Fresnel lenses used as primary optics in concentrating photovoltaic modules may show warping caused by lens manufacturing or module assembly (e.g., stress during molding or weight load) or by stress during operation (e.g., mismatch of thermal expansion between different materials). To quantify this problem, a simple method called the "checkerboard method" is presented. The proposed method identifies shape errors on the front surface of primary lenses by analyzing the Fresnel reflections. The paper also quantifies the effects these shape errors have on the optical performance of the lenses and on the electrical performance of concentrating modules incorporating them. The method can be used for quality control of Fresnel lenses in high-volume production scenarios.
Abstract:
Few studies have documented the response of gravitropically curved organs to a withdrawal of a constant gravitational stimulus. The effects of stimulus withdrawal on gravitropic curvature were studied by following individual roots of cress (Lepidium sativum L.) through reorientation and clinostat rotation. Roots turned to the horizontal curved down 62° and 88° after 1 and 5 h, respectively. Subsequent rotation on a clinostat for 6 h resulted in root straightening through a loss of gravitropic curvature in older regions and through new growth becoming aligned closer to the prestimulus vertical. However, these roots did not return completely to the prestimulus vertical, indicating the retention of some gravitropic response. Clinostat rotation shifted the mean root angle −36° closer to the prestimulus vertical, regardless of the duration of prior horizontal stimulation. Control roots (no horizontal stimulation) were slanted at various angles after clinostat rotation. These findings indicate that gravitropic curvature is not necessarily permanent, and that the root retains some commitment to its equilibrium orientation prior to gravitropic stimulation.
Abstract:
The political activity and growing independence of Chechnya's leader Ramzan Kadyrov raise questions about his loyalty and the possibility of his openly renouncing his servitude to Moscow. Such a scenario seems unlikely because of the dependence of Kadyrov's regime on Russia. He is burdened by his republic's financial dependence, the stain of collaboration, and the crimes committed against his own people, so his regime cannot exist without Moscow's support. However, Kadyrov's dependence on Moscow and the apparent stability of the situation in Chechnya do not mean that a lasting peace has been established there. The current plan for governing the republic and the relationship between Moscow and Grozny is a temporary arrangement, based not on durable solutions but on the situational convergence of the Kremlin's and Kadyrov's interests. A change of government in the Kremlin, or to an even greater degree a domestic crisis in Russia that weakens its position in the Caucasus, would mean the fall of Kadyrov's regime and the reactivation of pro-independence rhetoric in Chechnya.
Abstract:
In this paper we consider the exterior Neumann problem involving a critical Sobolev exponent. We establish the existence of two solutions having a prescribed limit at infinity.
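For orientation only, a minimal sketch of the general shape such problems take; the paper's exact equation, exponent, and boundary data may differ. On an exterior domain obtained by removing a bounded set D from R^N:

```latex
\begin{cases}
-\Delta u + \lambda u = u^{2^{*}-1}, \quad u > 0
  & \text{in } \Omega = \mathbb{R}^N \setminus \overline{D},\\[2pt]
\dfrac{\partial u}{\partial \nu} = 0
  & \text{on } \partial\Omega \quad \text{(Neumann condition)},\\[2pt]
u(x) \to \mu \ \ (\text{prescribed limit})
  & \text{as } |x| \to \infty,
\end{cases}
\qquad 2^{*} = \frac{2N}{N-2} \ \ \text{(critical Sobolev exponent)}.
```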
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size, the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is in general a very inefficient use of training patterns, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
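A minimal sketch of the training setup described, a fresh random batch each generation with fitness given by training error on that batch. The truncation selection, uniform crossover, and mutation rate below are generic choices made for brevity, not the GA analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def generalization_error(w, teacher):
    # For +-1 weight vectors, the perceptron generalization error is
    # arccos(R)/pi, with R the student-teacher overlap.
    overlap = w @ teacher / len(w)
    return np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi

def ga_generation(pop, teacher, batch_size, mutation_rate=0.01):
    # Fresh batch of random +-1 patterns this generation, labelled by teacher.
    n = pop.shape[1]
    patterns = rng.choice([-1.0, 1.0], size=(batch_size, n))
    labels = np.sign(patterns @ teacher)
    # Fitness: negative training error on this generation's batch.
    scores = -np.array([np.sum(np.sign(patterns @ w) != labels) for w in pop])
    # Truncation selection: keep the better half, refill by uniform
    # crossover plus sign-flip mutation.
    parents = pop[np.argsort(scores)[::-1][: len(pop) // 2]]
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n) < 0.5, a, b)
        child = np.where(rng.random(n) < mutation_rate, -child, child)
        children.append(child)
    return np.vstack([parents, children])

n, pop_size = 100, 50
teacher = rng.choice([-1.0, 1.0], size=n)
pop = rng.choice([-1.0, 1.0], size=(pop_size, n))
for _ in range(200):
    pop = ga_generation(pop, teacher, batch_size=20)
best = min(generalization_error(w, teacher) for w in pop)
```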
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
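For an Ising-type model with couplings J_ij and fields h_i, the naive mean-field fixed point and its TAP correction take the standard textbook forms (included here for orientation; the book covers far more general settings):

```latex
% Naive mean field:
m_i = \tanh\!\Big(\beta \sum_j J_{ij}\, m_j + \beta h_i\Big).
% TAP equations: the same fixed point with the Onsager reaction term,
% which removes each spin's own influence fed back through its neighbours:
m_i = \tanh\!\Big(\beta \sum_j J_{ij}\, m_j + \beta h_i
      - \beta^{2} m_i \sum_j J_{ij}^{2}\,\big(1 - m_j^{2}\big)\Big).
```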
Abstract:
Measurements (autokeratometry, A-scan ultrasonography and video ophthalmophakometry) of ocular surface radii, axial separations and alignment were made in the horizontal meridian of nine emmetropes (aged 20-38 years) with relaxed (cycloplegia) and active accommodation (mean ± 95% confidence interval: 3.7 ± 1.1 D). The anterior chamber depth (-1.5 ± 0.3 D) and both crystalline lens surfaces (front 3.1 ± 0.8 D; rear 2.1 ± 0.6 D) contributed to the dioptric vergence changes that accompany accommodation. Accommodation did not alter ocular surface alignment. Ocular misalignment in relaxed eyes is mainly due to eye rotation (5.7 ± 1.6° temporally), with small amounts of lens tilt (0.2 ± 0.8° temporally) and decentration (0.1 ± 0.1 mm nasally), but these results must be viewed with caution as we did not account for corneal asymmetry. Comparison of calculated and empirically derived coefficients (upon which the ocular surface alignment calculations depend) revealed that negligible inherent errors arose from neglecting ocular surface asphericity, lens gradient refractive index properties, surface astigmatism, effects of pupil size and centration, the assumed eye rotation axis position, and the use of linear equations for analysing Purkinje image shifts.
Abstract:
Purpose. The purpose of this study was to investigate the influence of corneal topography and thickness on intraocular pressure (IOP) and pulse amplitude (PA) as measured using the Ocular Blood Flow Analyzer (OBFA) pneumatonometer (Paradigm Medical Industries, Utah, USA). Methods. 47 university students volunteered for this cross-sectional study: mean age 20.4 yrs, range 18 to 28 yrs; 23 male, 24 female. Only measurements from the right eye of each participant were used. Central corneal thickness (CCT) and mean corneal radius were measured using Scheimpflug biometry and corneal topographic imaging, respectively. IOP and PA measurements were made with the OBFA pneumatonometer. Axial length was measured using A-scan ultrasound, owing to its known correlation with these corneal parameters. Stepwise multiple regression analysis was used to identify the components that contributed significant variance to the dependent variables of IOP and PA. Results. The mean IOP and PA measurements were 13.1 (SD 3.3) mmHg and 3.0 (SD 1.2) mmHg, respectively. IOP measurements made with the OBFA pneumatonometer correlated significantly with central corneal thickness (r = +0.374, p = 0.010), such that a 10 μm change in CCT was equivalent to a 0.30 mmHg change in measured IOP. PA measurements correlated significantly with axial length (part correlate = -0.651, p < 0.001) and mean corneal radius (part correlate = +0.459, p < 0.001), but not with corneal thickness. Conclusions. IOP measurements taken with the OBFA pneumatonometer are correlated with corneal thickness, but not with axial length or corneal curvature. Conversely, PA measurements are unaffected by corneal thickness, but correlated with axial length and corneal radius. These parameters should be taken into consideration when interpreting IOP and PA measurements made with the OBFA pneumatonometer.
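A minimal sketch of the kind of analysis described, forward stepwise OLS built on statsmodels; the DataFrame column names ('IOP', 'CCT', 'axial_length', 'mean_corneal_radius') are illustrative placeholders, not the study's data:

```python
import statsmodels.api as sm

def forward_stepwise(df, response, candidates, alpha=0.05):
    """Forward stepwise OLS: repeatedly add the candidate predictor with the
    smallest p-value below alpha, stopping when none qualifies."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            pvals[c] = sm.OLS(df[response], X).fit().pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
    return selected

# Hypothetical usage, one call per dependent variable:
# forward_stepwise(df, 'IOP', ['CCT', 'axial_length', 'mean_corneal_radius'])
# forward_stepwise(df, 'PA',  ['CCT', 'axial_length', 'mean_corneal_radius'])
```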
Abstract:
Public values are moving from a research concern to policy discourse and management practice. There are, though, different readings of what public values actually mean. Reflection suggests two distinct strands of thinking: a generative strand that sees public value emerging from processes of public debate; and an institutional interpretation that views public values as the attributes of government producers. Neither perspective seems to offer a persuasive account of how the public gains from strengthened public values. Key propositions on values are generated from comparison of influential texts. A provisional framework is presented of the values base of public institutions and the loosely coupled public propositions flowing from these values. Value propositions issue from different governing contexts, which are grouped into policy frames that then compete with other problem frames for citizens’ cognitive resources. Vital democratic commitments to pluralism require public values to be distributed in competition with other, respected, frames.
Abstract:
The concept of plagiarism is not uncommonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of 'moral', non-material goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore most often falls under two categories: ethical and legal.

On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit differently, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the assumption that a single definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, and so did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If among higher education students plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist's unfamiliarity with the text or topic, it should be considered 'positive plagiarism' (Howard, 1995: 796) and hence not an offense. Determining the intention behind instances of plagiarism therefore determines the nature of the disciplinary action adopted. Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as 'plagiarism police', although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to command investigative skills and tools that they most often lack.

We thus claim that the 'intention to deceive' cannot always be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. These claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism is often connoted with an illegal appropriation of ideas; earlier, she had shown through a comparison of four translations of Shakespeare's Julius Caesar into Spanish that linguistic evidence can demonstrate instances of plagiarism (Turell, 2004). This challenge is also reinforced by the practice of international organisations such as the IEEE, for whom plagiarism potentially has 'severe ethical and legal consequences' (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common, and what academia usually lacks, is their focus on the legal nature of plagiarism.

We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed. In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence.

In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist's intention or otherwise, this potential is restricted by the ability to identify a text as suspect of plagiarism. In an era of such massive textual production, 'policing' plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, a great deal of research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection has of necessity to draw not only on concepts from education and computational linguistics, but also from forensic linguistics, especially if it is to counter claims of being a 'simplistic response' (Robillard & Howard, 2008).

In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with a related word from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing.

On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate instances of 'patchwriting' from instances of plagiarism. This research demonstrates that referencing verbs are borrowed from the original in an attempt to construct the new text cohesively when the plagiarism is inadvertent, and that the plagiarist has made an effort to prevent the reader from identifying the text as plagiarism when it is intentional. In some of these cases, the referencing elements prove able to identify direct quotations and thus 'betray' and denounce plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not, mistakenly and simplistically, as plagiarism.
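A minimal sketch of the kind of paradigmatic-substitution check described, using WordNet via NLTK. The naive positional alignment and the relation set (shared synset, direct hypernym/hyponym) are simplifying assumptions for illustration, not the authors' method:

```python
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def related_in_wordnet(a, b):
    """True if words a and b share a synset or stand in a direct
    hypernym/hyponym relation, i.e. plausibly the same semantic field."""
    syns_a, syns_b = set(wn.synsets(a)), set(wn.synsets(b))
    if syns_a & syns_b:
        return True
    for s in syns_a:
        if syns_b & set(s.hypernyms()) or syns_b & set(s.hyponyms()):
            return True
    return False

def flag_substitutions(original_tokens, suspect_tokens):
    """Align token-by-token (a deliberately naive alignment) and flag
    positions where a word was replaced by a related but not identical one."""
    return [(i, o, s)
            for i, (o, s) in enumerate(zip(original_tokens, suspect_tokens))
            if o != s and related_in_wordnet(o, s)]

# Hypothetical usage on a short aligned fragment:
print(flag_substitutions(["the", "study", "shows"],
                         ["the", "survey", "shows"]))
```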