937 results for Improper Partial Semi-Bilateral Generating Function


Relevance:

40.00%

Publisher:

Abstract:

In this paper, we introduce a formula for the exact number of zeros of each partial sum of the Riemann zeta function inside infinitely many rectangles of the critical strips in which those zeros are situated.
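For orientation, the object being counted is standard (the display below is a textbook definition, not taken from the paper): the n-th partial sum of the zeta series is the Dirichlet polynomial

```latex
\zeta_n(s) = \sum_{k=1}^{n} \frac{1}{k^{s}}, \qquad s = \sigma + it,
```

and for each fixed n >= 2 the zeros of ζ_n lie in a vertical strip a_n <= σ <= b_n, so the natural counting regions are rectangles {s : a_n <= σ <= b_n, T_1 <= t <= T_2}.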

Relevance:

40.00%

Publisher:

Abstract:

The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems, whose convergence was analyzed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problems: the first consists of obtaining an approximate solution of some discretized convex problem, while the second requires solving a non-convex optimization problem involving the parametric constraints as objective function, with the parameter as variable. In this paper we tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG, which are compared with the only publicly available SIP solver, NSIPS, on a battery of test problems.
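For readers outside the area, the problem class can be stated in standard notation (not the authors'): an ordinary convex semi-infinite program is

```latex
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{subject to} \quad
g(x, t) \le 0 \;\; \text{for all } t \in T,
```

with f and g(·, t) convex and T an infinite (typically compact) index set. The second auxiliary problem of each RPSALG iteration then amounts to the global maximization of t ↦ g(x_k, t) over T, which is the Lipschitz optimization problem handed to ECAM.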

Relevance:

40.00%

Publisher:

Abstract:

A hard combinatorial problem with useful applications in the design of discrete devices is investigated: the two-block decomposition of a partial Boolean function. The key task is to find a weak partition of the set of arguments for which the considered function can be decomposed. Solving that task is substantially accelerated by first discovering traces of the sought-for partition, using efficient combinatorial operations based on the parallel execution of operations over adjacent units in the Boolean space.
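The abstract does not spell out the algorithm, but the kind of test involved can be illustrated with a simplified, hypothetical sketch (not the authors' method): for a candidate two-block partition of the arguments, an Ashenhurst-style decomposition f(X) = h(g(X_A), X_B) with a single intermediate signal exists when the columns of the partition matrix collapse to at most two classes, where "don't care" entries of the partial function make columns compatible rather than strictly equal. The greedy merge below is only a heuristic; the exact check for partial functions is a 2-colourability question on the column-incompatibility graph.

```python
from itertools import product

def decomposable(f, n, block_a):
    """Simplified Ashenhurst-style check for f(X) = h(g(X_A), X_B).

    f       : dict mapping n-bit tuples to 0/1; missing keys are don't
              cares, so f may be a partial Boolean function
    n       : number of arguments
    block_a : indices of the bound set X_A; the rest form the free set X_B
    """
    block_b = [i for i in range(n) if i not in block_a]

    def column(a_bits):
        # Column of the partition matrix for one assignment to X_A:
        # f's value for every assignment to X_B (None = don't care).
        col = []
        for b_bits in product((0, 1), repeat=len(block_b)):
            x = [0] * n
            for i, v in zip(block_a, a_bits):
                x[i] = v
            for i, v in zip(block_b, b_bits):
                x[i] = v
            col.append(f.get(tuple(x)))
        return col

    def compatible(c1, c2):
        # Columns merge if they never disagree on a defined value.
        return all(u is None or v is None or u == v for u, v in zip(c1, c2))

    classes = []                      # greedily merged column classes
    for a_bits in product((0, 1), repeat=len(block_a)):
        col = column(a_bits)
        for cls in classes:
            if compatible(cls, col):
                for j, v in enumerate(col):
                    if cls[j] is None:
                        cls[j] = v    # absorb newly defined entries
                break
        else:
            classes.append(col)
    return len(classes) <= 2          # one binary intermediate signal suffices

# Partial specification of f(x0, x1, x2) = (x0 XOR x1) AND x2:
f = {(0, 0, 1): 0, (0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 0}
print(decomposable(f, 3, block_a=[0, 1]))   # True: g = x0 XOR x1
```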

Relevance:

40.00%

Publisher:

Abstract:

We consider the existence and uniqueness problem for first-order partial differential-functional equations with an initial condition, in which the right-hand side depends on the derivative of the unknown function with a deviating argument.
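A representative initial-value problem of this type, written in an illustrative form (the abstract does not reproduce the equation), is

```latex
D_x z(x, y) = f\bigl(x,\, y,\, z(x, y),\, D_y z(\alpha(x, y))\bigr),
\qquad z(0, y) = \varphi(y),
```

where the deviating argument α(x, y) means the right-hand side depends on the derivative of the unknown function evaluated away from the current point.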

Relevance:

40.00%

Publisher:

Abstract:

Nikolai Kutev, Velichka Milusheva - We find explicitly all bi-umbilical foliated semi-symmetric surfaces in the four-dimensional Euclidean space R^4.

Relevance:

40.00%

Publisher:

Abstract:

PURPOSE: To quantitatively evaluate visual function 12 months after bilateral implantation of the Physiol FineVision® trifocal intraocular lens (IOL) and to compare these results with those obtained in the first postoperative month. METHODS: In this prospective case series, 20 eyes of 10 consecutive patients were included. Monocular and binocular, uncorrected and corrected visual acuities (distance, near, and intermediate) were measured. Metrovision® was used to test contrast sensitivity under static and dynamic conditions, in both photopic and low-mesopic settings. The same software was used for pupillometry and glare evaluation. Motion, achromatic, and chromatic contrast discrimination were tested using 2 innovative psychophysical tests. A complete ophthalmologic examination was performed preoperatively and at 1, 3, 6, and 12 months postoperatively. Psychophysical tests were performed 1 month after surgery and repeated 12 months postoperatively. RESULTS: Final distance uncorrected visual acuity (VA) was 0.00 ± 0.08 logMAR and distance corrected VA was 0.00 ± 0.05 logMAR. Distance corrected near VA was 0.00 ± 0.09 logMAR and distance corrected intermediate VA was 0.00 ± 0.06 logMAR. Glare testing, pupillometry, contrast sensitivity, motion, and chromatic and achromatic contrast discrimination did not differ significantly between the first and last visit (p>0.05) or when compared to an age-matched control group (p>0.05). CONCLUSIONS: The Physiol FineVision® trifocal IOL provided a satisfactory full range of vision and satisfactory quality-of-vision parameters 12 months after surgery. Visual acuity and psychophysical tests did not vary significantly between the first and last visits.

Relevance:

30.00%

Publisher:

Abstract:

Based on the Newmark-β method, the structural vibration response is predicted. Predictive control of the structural vibration is then achieved by searching, within prescribed ranges, for the control force parameters that optimize the objective function. In addition, a numerical simulation of a two-storey frame structure with magneto-rheological (MR) dampers under earthquake records is carried out, and the influence of the parameters on structural vibration reduction is discussed. The results demonstrate that semi-active control based on the Newmark-β predictive algorithm outperforms the classical full-state feedback control strategy and has remarkable advantages in structural vibration reduction and control robustness.
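The paper's predictive-control formulation is not reproduced above, but the response-prediction ingredient is the classical Newmark-β step. Here is a minimal sketch with an assumed two-storey shear frame (γ = 1/2, β = 1/4, the unconditionally stable average-acceleration variant; the matrices and excitation are placeholders, not the paper's):

```python
import numpy as np

def newmark_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for M u'' + C u' + K u = f(t).

    Returns displacement, velocity and acceleration at t + dt.
    beta = 1/4, gamma = 1/2 is the average-acceleration variant,
    suitable for predicting the structural response one step ahead.
    """
    a0 = 1.0 / (beta * dt**2)
    a1 = gamma / (beta * dt)
    K_eff = K + a0 * M + a1 * C
    f_eff = (f_next
             + M @ (a0 * u + (1.0 / (beta * dt)) * v + (0.5 / beta - 1.0) * a)
             + C @ (a1 * u + (gamma / beta - 1.0) * v
                    + dt * (0.5 * gamma / beta - 1.0) * a))
    u_new = np.linalg.solve(K_eff, f_eff)
    a_new = a0 * (u_new - u) - (1.0 / (beta * dt)) * v - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new

# Hypothetical two-storey shear frame (masses in kg, stiffnesses in N/m):
M = np.diag([1.0e4, 1.0e4])
K = np.array([[4.0e6, -2.0e6], [-2.0e6, 2.0e6]])
C = 0.05 * M + 0.001 * K              # assumed Rayleigh damping
u = v = a = np.zeros(2)
for t in np.arange(0.0, 1.0, 0.01):
    f = np.array([0.0, 1.0e3 * np.sin(10 * t)])   # placeholder excitation
    u, v, a = newmark_step(M, C, K, u, v, a, f, dt=0.01)
```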

Relevance:

30.00%

Publisher:

Abstract:

Low back pain is an increasing problem in industrialised countries and, although it is a major socio-economic problem in terms of medical costs and lost productivity, relatively little is known about the processes underlying the development of the condition. This is in part due to the complex interactions between bone, muscle, nerves and other soft tissues of the spine, and the fact that direct observation and/or measurement of the human spine is not possible using non-invasive techniques. Biomechanical models have been used extensively to estimate the forces and moments experienced by the spine. These models provide a means of estimating the internal parameters which cannot be measured directly. However, application of most of the models currently available is restricted to tasks resembling those for which the model was designed, due to the simplified representation of the anatomy. The aim of this research was to develop a biomechanical model to investigate the changes in forces and moments which are induced by muscle injury. In order to accurately simulate muscle injuries, a detailed quasi-static three-dimensional model representing the anatomy of the lumbar spine was developed. This model includes the nine major force-generating muscles of the region (erector spinae, comprising the longissimus thoracis and iliocostalis lumborum; multifidus; quadratus lumborum; latissimus dorsi; transverse abdominis; internal oblique and external oblique), as well as the thoracolumbar fascia through which the transverse abdominis and parts of the internal oblique and latissimus dorsi muscles attach to the spine. The muscles included in the model have been represented using 170 muscle fascicles, each having its own force-generating characteristics and line of action. Particular attention has been paid to ensuring the muscle lines of action are anatomically realistic, particularly for muscles which have broad attachments (e.g. internal and external obliques), muscles which attach to the spine via the thoracolumbar fascia (e.g. transverse abdominis), and muscles whose paths are altered by bony constraints such as the rib cage (e.g. iliocostalis lumborum pars thoracis and parts of the longissimus thoracis pars thoracis). To this end, a separate sub-model, which accounts for the shape of the torso by modelling it as a series of ellipses, has been developed to model the lines of action of the oblique muscles. Likewise, a separate sub-model of the thoracolumbar fascia has also been developed which accounts for the middle and posterior layers of the fascia, and ensures that the line of action of the posterior layer is related to the size and shape of the erector spinae muscle. Published muscle activation data are used to enable the model to predict the maximum forces and moments that may be generated by the muscles. These predictions are validated against published experimental studies reporting maximum isometric moments for a variety of exertions. The model performs well for flexion, extension and lateral bend exertions, but underpredicts the axial twist moments that may be developed. This discrepancy is most likely the result of differences between the experimental methodology and the modelled task. The application of the model is illustrated using examples of muscle injuries created by surgical procedures. The three examples used represent a posterior surgical approach to the spine, an anterior approach to the spine and unilateral total hip replacement surgery.
Although the three examples simulate different muscle injuries, all demonstrate the production of significant asymmetrical moments and/or reduced joint compression following surgical intervention. This result has implications for patient rehabilitation and the potential for further injury to the spine. The development and application of the model has highlighted a number of areas where current knowledge is deficient. These include muscle activation levels for tasks in postures other than upright standing, changes in spinal kinematics following surgical procedures such as spinal fusion or fixation, and a general lack of understanding of how the body adjusts to muscle injuries with respect to muscle activation patterns and levels, rate of recovery from temporary injuries and compensatory actions by other muscles. Thus the comprehensive and innovative anatomical model which has been developed not only provides a tool to predict the forces and moments experienced by the intervertebral joints of the spine, but also highlights areas where further clinical research is required.
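The torso sub-model is described only qualitatively above. As a rough illustration of the idea (hypothetical geometry and dimensions, not the thesis's actual formulation), a via-point on an oblique muscle's line of action can be forced to respect an elliptical torso cross-section like this:

```python
import numpy as np

def constrain_to_ellipse(p, a, b):
    """If the point p = (x, y) falls inside the ellipse x^2/a^2 + y^2/b^2 = 1,
    push it radially onto the boundary so the muscle path wraps around the
    torso instead of passing through it."""
    r = (p[0] / a) ** 2 + (p[1] / b) ** 2
    return p if r >= 1.0 else p / np.sqrt(r)   # p/sqrt(r) lies on the ellipse

# Hypothetical oblique-muscle path in one transverse plane, with the torso
# cross-section modelled as an ellipse (semi-axes a, b in metres).
origin = np.array([0.12, 0.05])
insertion = np.array([-0.10, 0.08])
midpoint = 0.5 * (origin + insertion)
via = constrain_to_ellipse(midpoint, a=0.15, b=0.10)
path = [origin, via, insertion]     # piecewise-linear line of action
```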

Relevance:

30.00%

Publisher:

Abstract:

In this work, we investigate an alternative bootstrap approach based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296-1301] and on the Durbin-Levinson algorithm to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in an extensive Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to demonstrate the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the interval.
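A minimal sketch of the generation step, assuming the sample partial autocorrelations φ_kk have already been estimated from the data (e.g. with statsmodels' pacf): by Ramsey's characterization, any sequence with |φ_kk| < 1 defines a valid stationary Gaussian process, so the Durbin-Levinson one-step predictors can be used to draw a surrogate series directly. Beyond lag p this sketch truncates to an AR(p) approximation.

```python
import numpy as np

def surrogate_from_pacf(pacf, n, sigma2=1.0, rng=None):
    """Simulate a zero-mean Gaussian series of length n whose partial
    autocorrelations match `pacf` (pacf[k-1] = phi_{kk}, |phi_{kk}| < 1),
    via the Durbin-Levinson recursion for the one-step predictors."""
    rng = np.random.default_rng(rng)
    p = len(pacf)
    phi = np.zeros(p)        # phi_{k,1..k} at the current order k
    v = sigma2               # innovation variance v_k; v_0 = gamma(0)
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(v))
    for t in range(1, n):
        if t <= p:           # extend the predictor from order t-1 to t
            kk = pacf[t - 1]
            phi[: t - 1] = phi[: t - 1] - kk * phi[: t - 1][::-1]
            phi[t - 1] = kk
            v *= 1.0 - kk ** 2
        k = min(t, p)
        # one-step prediction from the k most recent values
        pred = phi[:k] @ x[t - 1 :: -1][:k]
        x[t] = pred + rng.normal(0.0, np.sqrt(v))
    return x
```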

Relevance:

30.00%

Publisher:

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.
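The estimator described above is compact enough to sketch. The environment and policy interfaces below are assumptions, not part of the paper; note the storage claim is visible in the code, which keeps only the trace z and the running estimate Δ, each the size of θ.

```python
import numpy as np

def gpomdp_gradient(env, theta, grad_log_policy, sample_action, T, beta=0.9,
                    rng=None):
    """Biased GPOMDP estimate of the average-reward gradient.

    `env`, `grad_log_policy` and `sample_action` are assumed interfaces:
    the environment emits observations y and rewards r, and the policy
    mu(u | theta, y) is differentiable in theta.  beta in [0, 1) trades
    bias against variance.
    """
    rng = np.random.default_rng(rng)
    z = np.zeros_like(theta)        # eligibility trace of score functions
    delta = np.zeros_like(theta)    # running gradient estimate
    y = env.reset()                 # observation only (POMDP)
    for t in range(T):
        u = sample_action(theta, y, rng)
        z = beta * z + grad_log_policy(theta, y, u)
        y, r = env.step(u)
        delta += (r * z - delta) / (t + 1)   # incremental average of r * z
    return delta
```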

Relevance:

30.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric positive semi-definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
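A toy version of the idea can be posed directly to an off-the-shelf SDP solver. The sketch below learns a linear combination of fixed candidate kernel matrices over training-plus-test points by maximizing the alignment of the training block with the label matrix, a simplified stand-in for the paper's formulations (cvxpy is assumed available):

```python
import cvxpy as cp
import numpy as np

def learn_kernel(kernels, y, c=1.0):
    """Learn K = sum_i mu_i K_i over train+test points by SDP.

    kernels : list of (n, n) symmetric candidate kernel matrices,
              indexed over training points followed by test points
    y       : +/-1 labels for the first m (training) points
    Maximizes the alignment <K_mm, y y^T> of the training block with
    the label matrix, subject to K being PSD with fixed trace.
    """
    n = kernels[0].shape[0]
    m = len(y)
    mu = cp.Variable(len(kernels))
    K = cp.Variable((n, n), PSD=True)      # enforces K >= 0
    combo = sum(mu[i] * kernels[i] for i in range(len(kernels)))
    constraints = [K == combo, cp.trace(K) == c]
    alignment = cp.sum(cp.multiply(np.outer(y, y), K[:m, :m]))
    cp.Problem(cp.Maximize(alignment), constraints).solve()
    return K.value, mu.value

# Toy usage: two candidate kernels on 6 points, the first 4 labelled.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
K_lin = X @ X.T
K_rbf = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
K_opt, mu_opt = learn_kernel([K_lin, K_rbf], y=np.array([1, 1, -1, -1]))
```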

Relevance:

30.00%

Publisher:

Abstract:

A magneto-rheological (MR) fluid damper is a semi-active control device that has recently begun to receive more attention in the vibration control community. However, the inherent nonlinearity of the MR fluid damper makes it challenging to achieve high control-system performance with this device. The development of an accurate modeling method for an MR fluid damper is therefore necessary to take advantage of its unique characteristics. Our goal was to develop an alternative method for modeling an MR fluid damper using a self-tuning fuzzy (STF) method based on a neural technique. The behavior of the damper under study is estimated directly through a fuzzy mapping system. To improve the accuracy of the STF model, back-propagation and gradient descent are used to train the fuzzy parameters online, minimizing the model error function. A series of simulations was carried out to validate the effectiveness of the proposed modeling method against data measured in experiments on a test rig with the MR fluid damper under study. The modeling results show that the proposed STF inference system, trained online by the neural technique, describes the behavior of the MR fluid damper well without requiring additional computation time to generate the model parameters.
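A minimal sketch of the ingredient described above: a zero-order Takagi-Sugeno-style fuzzy map with Gaussian memberships whose centres, widths and consequent weights are updated online by gradient descent on the squared model error. A one-dimensional input (say, piston velocity mapped to damper force) is assumed for brevity; the authors' actual rule base and training data are not reproduced.

```python
import numpy as np

class SelfTuningFuzzy:
    """Zero-order TS fuzzy map f(x) = sum_i w_i m_i(x) / sum_i m_i(x)
    with Gaussian memberships m_i, tuned online by gradient descent."""

    def __init__(self, centers, widths, rng=None):
        rng = np.random.default_rng(rng)
        self.c = np.asarray(centers, float)        # rule centres
        self.s = np.asarray(widths, float)         # rule widths
        self.w = rng.normal(0, 0.1, len(self.c))   # consequent weights

    def predict(self, x):
        m = np.exp(-((x - self.c) / self.s) ** 2)  # firing strengths
        return m, (self.w @ m) / (m.sum() + 1e-12)

    def update(self, x, target, lr=0.05):
        """One online step minimizing e^2 / 2, with e = f(x) - target."""
        m, y = self.predict(x)
        e = y - target
        g = m / (m.sum() + 1e-12)                  # normalized strengths
        self.w -= lr * e * g                       # dE/dw_i = e * g_i
        # chain rule through the Gaussian memberships
        dm = e * (self.w - y) / (m.sum() + 1e-12) * m
        self.c -= lr * dm * 2 * (x - self.c) / self.s ** 2
        self.s -= lr * dm * 2 * (x - self.c) ** 2 / self.s ** 3
        return y

# Toy usage: fit a hypothetical velocity -> force curve online.
model = SelfTuningFuzzy(centers=np.linspace(-1, 1, 7), widths=np.full(7, 0.3))
for step in range(2000):
    v = np.random.uniform(-1, 1)          # piston velocity (assumed input)
    force = np.tanh(3 * v) + 0.1 * v      # placeholder damper behavior
    model.update(v, force)
```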

Relevance:

30.00%

Publisher:

Abstract:

Traffic-generated semi-volatile and non-volatile organic compounds (SVOCs and NVOCs) pose a serious threat to human and ecosystem health when washed off into receiving water bodies by stormwater. Rainfall characteristics influenced by climate change make the estimation of these pollutants in stormwater quite complex. The research study discussed in this paper developed a prediction framework for such pollutants under the dynamic influence of climate change on rainfall characteristics. It was established through principal component analysis (PCA) that the intensities and durations of the low-to-moderate rain events induced by climate change mainly affect the wash-off of SVOCs and NVOCs from urban roads. The study outcomes were able to overcome the limitations of stringent laboratory preparation of calibration matrices by extracting uncorrelated underlying factors in the data matrices through systematic application of PCA and factor analysis (FA). Based on the initial findings from PCA and FA, the framework incorporated an orthogonal rotatable central composite experimental design to set up the calibration matrices and partial least squares regression to identify significant variables for predicting the target SVOCs and NVOCs in four particulate fractions spanning >300 μm to 1 μm and one dissolved fraction of <1 μm. For the particulate fractions in the >300 μm to 1 μm range, similar distributions of predicted and observed concentrations of the target compounds from the minimum to the 75th percentile were achieved, and the inter-event coefficients of variation were 5% to 25%. The limited solubility of the target compounds in stormwater restricted the predictive capacity of the proposed method for the dissolved fraction of <1 μm.
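The statistical pipeline outlined above maps onto standard tooling. The sketch below uses placeholder data and scikit-learn; the actual calibration matrices, variables and fractions are, of course, the study's own.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

# Placeholder matrices: rows = rain events, columns = rainfall
# characteristics (intensity, duration, ...); Y holds measured SVOC/NVOC
# concentrations for one hypothetical particulate fraction.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))          # rainfall/wash-off predictors
Y = rng.normal(size=(40, 3))          # target compound concentrations

Xs = StandardScaler().fit_transform(X)

# Step 1: PCA and FA to extract the uncorrelated underlying factors
# (the abstract points to intensity/duration of low-to-moderate events).
pca = PCA(n_components=3).fit(Xs)
fa = FactorAnalysis(n_components=3).fit(Xs)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("FA loadings shape:", fa.components_.shape)

# Step 2: PLS regression on the calibration matrix to identify the
# significant predictors of the target SVOCs/NVOCs.
pls = PLSRegression(n_components=2).fit(Xs, Y)
Y_hat = pls.predict(Xs)
```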

Relevance:

30.00%

Publisher:

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design efforts toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication on the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements, across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and the ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research work proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design. Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, in which key aspects of the system that had not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance based on the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components.
The design schema provides constraints on the generation of designs, thus enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of the designer's creativity within a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms. The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem, so that the design requirements of each level are dealt with separately and then reassembled in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach, exploring the range of design solutions through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
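As a caricature of the hierarchical, bottom-up evolutionary idea (entirely illustrative; the HEAD system's design schema, graph encoding and fitness functions are far richer), a plain genetic algorithm can be run level by level, with the lower-level results feeding the next level's fitness:

```python
import random

def evolve(pop, fitness, generations=60, mut=0.1):
    """Plain GA: tournament selection, one-point crossover, per-gene
    mutation.  Genomes are lists of floats in [0, 1]."""
    for _ in range(generations):
        nxt = []
        while len(nxt) < len(pop):
            a = max(random.sample(pop, 3), key=fitness)
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, len(a))
            child = [g if random.random() > mut else random.random()
                     for g in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

def random_pop(size, genes):
    return [[random.random() for _ in range(genes)] for _ in range(size)]

# Level 1 ('Room'): evolve each room's proportions toward a target ratio.
def room_fitness(g):
    w, h = 3.0 + 5.0 * g[0], 3.0 + 5.0 * g[1]
    return -abs(w / h - 1.5)          # hypothetical aspect-ratio objective

rooms = [evolve(random_pop(30, 2), room_fitness) for _ in range(4)]
widths = [3.0 + 5.0 * r[0] for r in rooms]

# Level 2 ('Layout'): place the evolved rooms along an axis; the fitness
# reuses the level-1 results (bottom-up reassembly) and penalises overlap.
def layout_fitness(g):
    xs = [20.0 * gi for gi in g]
    overlap = sum(max(0.0, (widths[i] + widths[j]) / 2 - abs(xs[i] - xs[j]))
                  for i in range(4) for j in range(i))
    return -overlap

layout = evolve(random_pop(30, 4), layout_fitness)
```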