Abstract:
A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
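For contrast with the HD-FDTD scheme summarized above, the conventional method it improves upon can be sketched in a few lines. The following is a textbook one-dimensional free-space Yee/FDTD update in Python, not the authors' method; the grid sizes, source, and all names are illustrative.

    import numpy as np

    # Textbook 1D free-space FDTD (Yee) update loop; illustrative values only.
    c0 = 299792458.0              # speed of light (m/s)
    eps0 = 8.8541878128e-12       # vacuum permittivity
    mu0 = 4e-7 * np.pi            # vacuum permeability
    nz, nt = 400, 1000            # grid cells, time steps
    dz = 1e-3                     # spatial step (m)
    dt = dz / (2.0 * c0)          # time step at half the Courant limit

    Ex = np.zeros(nz)             # electric field at integer nodes
    Hy = np.zeros(nz - 1)         # magnetic field on the staggered half-grid
    for n in range(nt):
        Hy += (dt / (mu0 * dz)) * (Ex[1:] - Ex[:-1])          # update H from curl E
        Ex[1:-1] += (dt / (eps0 * dz)) * (Hy[1:] - Hy[:-1])   # update E from curl H
        Ex[nz // 2] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)  # soft Gaussian source

At low frequencies the number of such steps per source period becomes enormous, which is precisely the regime the HD-FDTD method addresses.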
Abstract:
Let X and Y be Hausdorff topological vector spaces, K a nonempty, closed, and convex subset of X, and C : K → 2^Y a point-to-set mapping such that for each x ∈ K, C(x) is a pointed, closed, and convex cone in Y with int C(x) ≠ ∅. Given a mapping g : K → K and a vector-valued bifunction f : K × K → Y, we consider the implicit vector equilibrium problem (IVEP) of finding x* ∈ K such that f(g(x*), y) ∉ −int C(x*) for all y ∈ K. This problem generalizes the (scalar) implicit equilibrium problem and the implicit variational inequality problem. We propose the dual of the implicit vector equilibrium problem (DIVEP) and establish the equivalence between (IVEP) and (DIVEP) under certain assumptions. We also give characterizations of the set of solutions of (IVEP) in the cases of nonmonotonicity, weak C-pseudomonotonicity, C-pseudomonotonicity, and strict C-pseudomonotonicity, respectively, and conclude that under these assumptions the sets of solutions are nonempty, closed, and convex. Finally, we give some applications of (IVEP) to vector variational inequality problems and vector optimization problems. (C) 2003 Elsevier Science Ltd. All rights reserved.
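Restated compactly (a direct transcription of the definitions above into standard notation):

    \text{(IVEP)}:\quad \text{find } x^* \in K \ \text{such that}\ f(g(x^*), y) \notin -\operatorname{int} C(x^*) \ \text{for all } y \in K,

where C : K \to 2^Y assigns to each x \in K a pointed, closed, convex cone C(x) \subset Y with \operatorname{int} C(x) \neq \emptyset, g : K \to K, and f : K \times K \to Y.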
Abstract:
In the present paper, we study the quasiequilibrium problem and the generalized quasiequilibrium problem of generalized quasi-variational inequality in H-spaces by a new method. Some new equilibrium existence theorems are given. Our results either differ from the corresponding known results or contain some recent results as special cases. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high-frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high-frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
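For reference, the implicit generalized alpha method advances the semi-discrete equation of motion using states weighted between time levels (the standard Chung-Hulbert form; the paper's contribution is a rewriting of the explicit variant so that its four parameters can be matched to this one):

    M\,a_{n+1-\alpha_m} + C\,v_{n+1-\alpha_f} + K\,d_{n+1-\alpha_f} = F(t_{n+1-\alpha_f}),
    d_{n+1} = d_n + \Delta t\,v_n + \Delta t^2\left[(\tfrac{1}{2}-\beta)\,a_n + \beta\,a_{n+1}\right],
    v_{n+1} = v_n + \Delta t\left[(1-\gamma)\,a_n + \gamma\,a_{n+1}\right],

with the weighted states x_{n+1-\alpha} = (1-\alpha)\,x_{n+1} + \alpha\,x_n for x \in \{d, v, a\}, and (\alpha_m, \alpha_f, \beta, \gamma) the four algorithmic parameters referred to above.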
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed interpolating on the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
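The impulse-summation bookkeeping described above can be sketched in a few lines of Python. This is a schematic illustration of the idea only (the forces, timesteps, and mass are invented for the example): the velocity change of a node over one large step DT is predicted by summing impulses f·dt of the internal forces of the attached elements, each force evaluated with its own local timestep.

    import numpy as np

    # Schematic impulse summation at a node shared by a soft element
    # (integrated with the large step DT) and a stiff element (substepped).
    m = 1.0                     # nodal mass (illustrative)
    DT, S = 1e-3, 8             # large step; substeps for the stiff element
    dt = DT / S

    def f_soft(t):              # internal force from the large-timestep element
        return 10.0 * np.sin(200.0 * t)

    def f_stiff(t):             # internal force from the small-timestep element
        return 50.0 * np.sin(4000.0 * t)

    t0 = 0.0
    impulse = f_soft(t0) * DT                     # one evaluation over DT
    impulse += sum(f_stiff(t0 + i * dt) * dt      # S evaluations over dt
                   for i in range(S))
    dv = impulse / m                              # predicted velocity change
    print(f"velocity change over one large step: {dv:.6e}")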
Abstract:
For products sold with warranty, the warranty servicing cost can be reduced by improving product reliability through a development process. However, this increases the unit manufacturing cost. Optimal development must achieve a trade-off between these two costs. The outcome of the development process is uncertain and needs to be taken into account in the determination of the optimal development effort. The paper develops a model where this uncertainty is taken into account. (C) 2003 Elsevier Ltd. All rights reserved.
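A minimal illustrative formulation of this trade-off (not the paper's actual model) is to minimize, over the development effort e, the expected total cost per unit

    J(e) = c_m(e) + \mathbb{E}\left[N(W;\, \theta(e))\right] c_r,

where c_m(e) is the unit manufacturing cost (increasing in e), c_r the average cost of rectifying a warranty claim, W the warranty period, and N(W; θ(e)) the random number of failures under warranty for a product whose reliability θ(e) improves, uncertainly, with e. The uncertainty of the development outcome enters through the distribution of θ(e).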
Abstract:
For repairable items sold with free replacement warranty, the actions available to the manufacturer to rectify failures under warranty are to (1) repair the failed item or (2) replace it with a new one. A proper repair-replace strategy can reduce the expected cost of servicing the warranty. In this paper, we study repair-replace strategies for items sold with a two-dimensional free replacement warranty. (C) 2003 Elsevier Ltd. All rights reserved.
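For context, a two-dimensional free replacement warranty is commonly characterized in the warranty literature by a region in the age-usage plane (the standard formulation, not necessarily the precise one adopted in the paper):

    \Omega = [0, W) \times [0, U),

so coverage ends at age W or accumulated usage U, whichever occurs first. A repair-replace strategy then partitions \Omega into a subregion where failed items are replaced by new ones and a subregion where they are repaired.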
Abstract:
This paper deals with an n-fold Weibull competing risk model. A characterisation of the Weibull probability paper (WPP) plot is given, along with estimation of model parameters when modelling a given data set. These are illustrated through two examples. A study of the different possible shapes for the density and failure rate functions is also presented. (C) 2003 Elsevier Ltd. All rights reserved.
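In the usual formulation of such a model, the failure distribution is that of a series (competing risk) system of n independent Weibull risks, and the WPP plot graphs y = ln{-ln[1 - F(t)]} against x = ln t, on which each individual Weibull component appears as a straight line of slope \beta_i:

    F(t) = 1 - \prod_{i=1}^{n} \left(1 - F_i(t)\right), \qquad F_i(t) = 1 - \exp\left[-(t/\eta_i)^{\beta_i}\right].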
Abstract:
A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper demonstrates why Crofton's theorem need not be used to link moments of the trace length distribution captured by scan line or areal mapping to the moments of the diametral distribution of joints represented as disks, and that it is incorrect to do so. The valid relationships for areal or scan line mapping between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would follow if Crofton's theorem were assumed to apply. For areal mapping the relationship so obtained is fortuitously correct, but for scan line mapping it is incorrect.
Abstract:
This paper presents a large amplitude vibration analysis of pre-stressed functionally graded material (FGM) laminated plates that are composed of a shear deformable functionally graded layer and two surface-mounted piezoelectric actuator layers. Nonlinear governing equations of motion are derived within the context of Reddy's higher-order shear deformation plate theory to account for transverse shear strain and rotary inertia. Due to the bending and stretching coupling effect, a nonlinear static problem is solved first to determine the initial stress state and pre-vibration deformations of the plate that is subjected to uniform temperature change, in-plane forces and applied actuator voltage. By adding an incremental dynamic state to the pre-vibration state, the differential equations that govern the nonlinear vibration behavior of pre-stressed FGM laminated plates are derived. A semi-analytical method that is based on one-dimensional differential quadrature and Galerkin technique is proposed to predict the large amplitude vibration behavior of the laminated rectangular plates with two opposite clamped edges. Linear vibration frequencies and nonlinear normalized frequencies are presented in both tabular and graphical forms, showing that the normalized frequency of the FGM laminated plate is very sensitive to vibration amplitude, out-of-plane boundary support, temperature change, in-plane compression and the side-to-thickness ratio. The CSCF and CFCF plates (C, S, and F denoting clamped, simply supported, and free edges, respectively) even change the inherent hard-spring characteristic to soft-spring behavior at large vibration amplitudes. (C) 2003 Elsevier B.V. All rights reserved.
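The hard-spring/soft-spring terminology can be illustrated by the single-mode Duffing-type reduction often used in nonlinear plate vibration (an illustration of the terminology only, not the governing equations derived in the paper):

    \ddot{q} + \omega_L^2\, q + \mu\, q^3 = 0 \quad\Longrightarrow\quad \frac{\omega_{NL}}{\omega_L} \approx \sqrt{1 + \frac{3\mu A^2}{4\omega_L^2}},

so the normalized frequency rises with vibration amplitude A when \mu > 0 (hard-spring) and falls when \mu < 0 (soft-spring).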
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
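The back-to-back idea can be sketched on a toy system (a hypothetical example; the specially modified observer and the feature matrices of the actual methodology are not reproduced here). Two implementations of the same model are driven with identical inputs; a nonzero residual flags a coding error, and at the step where the error first acts the residual lies along a known direction.

    import numpy as np

    # Two "independent" implementations of x[k+1] = A x[k] + B u[k]; the
    # second contains a deliberate sign error in its second equation.
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([[0.0],
                  [1.0]])

    def step_ref(x, u):
        return A @ x + B @ u            # reference implementation

    def step_buggy(x, u):
        Ab = A.copy()
        Ab[1, 1] = -0.8                 # coding error in equation 2
        return Ab @ x + B @ u

    rng = np.random.default_rng(0)
    x1 = np.zeros((2, 1))
    x2 = np.zeros((2, 1))
    residuals = []
    for k in range(50):
        u = rng.normal(size=(1, 1))     # identical input to both copies
        x1, x2 = step_ref(x1, u), step_buggy(x2, u)
        residuals.append((x1 - x2).ravel())
    r = np.array(residuals)
    print("mean |residual| per state:", np.abs(r).mean(axis=0))

In this naive comparison the residual gradually smears across states through the system dynamics; the role of the modified observer described above is precisely to confine each error's signature to a fixed, known subspace.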
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
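The subspace test at the heart of the isolation step can be sketched as follows (a toy linear-algebra illustration, not the paper's algorithm): a candidate set of errors can explain the data only if the span of the residuals lies inside the span of their combined feature matrices, while an error whose feature subspace is orthogonal to the residuals is 'impossible'.

    import numpy as np

    def col_basis(M, tol=1e-8):
        """Orthonormal basis of the column span of M."""
        u, s, _ = np.linalg.svd(M, full_matrices=False)
        return u[:, s > tol * s.max()]

    def explains(feature_mats, R):
        """True if the span of residuals R (rows = samples) lies inside the
        combined span of the candidate errors' feature matrices."""
        F = col_basis(np.hstack(feature_mats))   # combined feature span
        S = col_basis(R.T)                       # residual span
        return np.allclose(F @ (F.T @ S), S, atol=1e-6)

    # Toy check: residuals confined to the plane spanned by f1 and f2.
    f1 = np.array([[1.0, 0.0, 0.0]]).T
    f2 = np.array([[0.0, 1.0, 0.0]]).T
    R = np.random.default_rng(1).normal(size=(20, 2)) @ np.hstack([f1, f2]).T
    print(explains([f1, f2], R))   # True: together they explain the residuals
    print(explains([f1], R))       # False: f1 alone cannot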
Abstract:
Throughout the latter months of 2000 and early 2001, the Australian public, media and parliament were engaged in a long and emotive debate about motherhood. This debate constructed the two main protagonists, the unborn 'child' and the potential mother, with a variety of different and often oppositional identities. The article looks at the way that these subject identities interacted during the debate, starting from the premise that policy making has unintended and unacknowledged material outcomes, and using governmentality as a tool through which to analyse and understand processes of identity manipulation and resistance within policy making. The recent debate concerning the right of lesbian and single women to access new reproductive technologies in Australia is used as a case study. Nominally the debate was about access to IVF technology; in reality, however, the debate was about the governing of women and, in particular, the governing of motherhood identities. The article focuses on the parliamentary debate over the drafting of legislation designed to stop lesbian and single women from accessing these technologies, particularly the utilization of the 'unborn' subject within these debates as a device to discipline the identity of 'mother'.
Abstract:
The branching structure of neurones is thought to influence patterns of connectivity and how inputs are integrated within the arbor. Recent studies have revealed a remarkable degree of variation in the branching structure of pyramidal cells in the cerebral cortex of diurnal primates, suggesting regional specialization in neuronal function. Such specialization in pyramidal cell structure may be important for various aspects of visual function, such as object recognition and color processing. To better understand the functional role of regional variation in the pyramidal cell phenotype in visual processing, we determined the complexity of the dendritic branching pattern of pyramidal cells in visual cortex of the nocturnal New World owl monkey. We used the fractal dilation method to quantify the branching structure of pyramidal cells in the primary visual area (V1), the second visual area (V2) and the caudal and rostral subdivisions of inferotemporal cortex (ITc and ITr, respectively), which are often associated with color processing. We found that, as in diurnal monkeys, there was a trend for cells of increasing fractal dimension with progression through these cortical areas. The increasing complexity paralleled a trend for increasing symmetry. That we found a similar trend in both diurnal and nocturnal monkeys suggests that it was a feature of a common anthropoid ancestor.
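As an illustration of how the fractal dimension of a binary image of an arbor can be estimated, here is a simple box-counting sketch in Python (box counting is a close relative of the dilation method used in the study, not the method itself; the toy image is a diagonal line, whose dimension is approximately 1):

    import numpy as np

    def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
        """Box-counting estimate of the fractal dimension of a 2-D binary image."""
        counts = []
        for s in sizes:
            h = img.shape[0] // s * s
            w = img.shape[1] // s * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append((blocks.sum(axis=(1, 3)) > 0).sum())   # occupied boxes
        # Slope of log N(s) versus log(1/s) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    img = np.zeros((256, 256), dtype=bool)
    idx = np.arange(256)
    img[idx, idx] = True                 # toy "arbor": a straight diagonal line
    print(f"estimated dimension: {box_count_dimension(img):.2f}")   # ~1.0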
Abstract:
We analyze the sequences of round-off errors of the orbits of a discretized planar rotation, from a probabilistic angle. It was shown [Bosio & Vivaldi, 2000] that for a dense set of parameters, the discretized map can be embedded into an expanding p-adic dynamical system, which serves as a source of deterministic randomness. For each parameter value, these systems can generate infinitely many distinct pseudo-random sequences over a finite alphabet, whose average period is conjectured to grow exponentially with the bit-length of the initial condition (the seed). We study some properties of these symbolic sequences, deriving a central limit theorem for the deviations between round-off and exact orbits, and obtain bounds concerning repetitions of words. We also explore some asymptotic problems computationally, verifying, among other things, that the occurrence of words of a given length is consistent with that of an abstract Bernoulli sequence.
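A brief sketch of the kind of experiment involved: the integer orbit of a discretized planar rotation is compared with the exact rotation from the same seed, and the deviations are collected. The map below, (x, y) → (⌊λx⌋ − y, x) with λ = 2cos(2πν), is a standard discretization studied in this literature; the parameter values are illustrative.

    import numpy as np

    # Round-off orbit of a discretized planar rotation vs the exact map.
    nu = 0.2                            # rotation number (illustrative)
    lam = 2.0 * np.cos(2.0 * np.pi * nu)
    x, y = 100, 0                       # integer seed
    xe, ye = float(x), float(y)         # exact (real-arithmetic) orbit
    devs = []
    for n in range(10000):
        x, y = int(np.floor(lam * x)) - y, x    # discretized (round-off) map
        xe, ye = lam * xe - ye, xe              # exact map
        devs.append(np.hypot(x - xe, y - ye))
    devs = np.asarray(devs)
    print(f"mean deviation over {devs.size} steps: {devs.mean():.3f}")

A central limit theorem of the type derived in the paper concerns the statistics of exactly such deviation sequences.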