957 results for affect theory
Abstract:
PROBLEM: The cost of delivering medium-density apartments impedes the supply of new and more affordable housing in established suburbs. EXISTING FOCUS: planning controls; construction costs, especially labour; and regulation (e.g. sustainability).
Abstract:
This paper presents a novel algebraic formulation of the central problem of screw theory, namely the determination of the principal screws of a given system. Using the algebra of dual numbers, it shows that the principal screws can be determined via the solution of a generalised eigenproblem of two real, symmetric matrices. This approach allows the study of the principal screws of the general two- and three-systems associated with a manipulator of arbitrary geometry in terms of closed-form expressions of its architecture and configuration parameters. We also present novel methods for the determination of the principal screws for the four- and five-systems which do not require the explicit computation of the reciprocal systems. Principal screws of the systems of different orders are identified from one uniform criterion, namely that the pitches of the principal screws are the extreme values of the pitch. The classical results of screw theory, namely the equations for the cylindroid and the pitch-hyperboloid associated with the two- and three-systems, respectively, have been derived within the proposed framework. Algebraic conditions have been derived for some of the special screw systems. The formulation is also illustrated with several examples including two spatial manipulators of serial and parallel architecture, respectively.
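The reduction to a generalised eigenproblem of two real, symmetric matrices can be illustrated numerically. In the sketch below the matrices are arbitrary stand-ins, not the pitch forms derived in the paper; the problem A v = λ B v is solved by Cholesky whitening.

```python
import numpy as np

# Illustrative symmetric matrices standing in for the paper's pitch-related
# forms; B must be positive definite for the whitening step below.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])
B = np.array([[1.5, 0.2, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.1, 1.0]])

# Generalised eigenproblem A v = lam B v, solved by Cholesky whitening:
# with B = L L^T, it becomes the symmetric problem (L^-1 A L^-T) u = lam u.
Lc = np.linalg.cholesky(B)
Linv = np.linalg.inv(Lc)
vals, U = np.linalg.eigh(Linv @ A @ Linv.T)
vecs = Linv.T @ U                 # back-transformed generalised eigenvectors
print(vals)                       # ascending; in the paper's setting, the
                                  # extreme eigenvalues are the extreme pitches
```

The eigenvalues are real because both matrices are symmetric, matching the paper's uniform criterion that principal pitches are extreme values of the pitch.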
Abstract:
An alternative derivation of the dispersion relation for the transverse vibration of a circular cylindrical shell is presented. The use of the shallow shell theory model leads to a simpler derivation of the same result. Further, the applicability of the dispersion relation is extended to the axisymmetric mode and the high frequency beam mode.
Abstract:
KIRCHHOFF’S theory [1] and the first-order shear deformation theory (FSDT) [2] of plates in bending are simple theories that continue to be used to obtain design information. Within the classical small-deformation theory of elasticity, the problem consists of determining three displacements, u, v, and w, that satisfy three equilibrium equations in the interior of the plate and three specified surface conditions. FSDT is a sixth-order theory with a provision to satisfy three edge conditions; unlike Kirchhoff’s theory, it maintains an independent linear thicknesswise distribution of tangential displacement even if the lateral deflection, w, is zero along a supported edge. However, each of the in-plane distributions of the transverse shear stresses, which are of lower order, is expressed as a sum of higher-order displacement terms. Kirchhoff’s assumption of zero transverse shear strains is, however, not a limitation of the theory as a first approximation to the exact 3-D solution.
Abstract:
High mechanical stress in atherosclerotic plaques at vulnerable sites, called critical stress, contributes to plaque rupture. The site of minimum fibrous cap (FC) thickness (FCMIN) and the plaque shoulder are well-documented vulnerable sites. The inherent weakness of the FC material at its thinnest point increases the stress there, making the site vulnerable, while the large curvature of the lumen contour over the FC may also result in increased plaque stress. We aimed to assess critical stresses at FCMIN and at the site of maximum lumen curvature over the FC (LCMAX), and to quantify the difference to see which vulnerable site had the highest critical stress and was, therefore, at highest risk of rupture. One hundred patients underwent high-resolution carotid magnetic resonance (MR) imaging. We used 352 MR slices with delineated atherosclerotic components for the simulation study. Stresses at all the integral nodes along the lumen surface were calculated using the finite-element method. FCMIN and LCMAX were identified, and critical stresses at these sites were assessed and compared. Critical stress at FCMIN was significantly lower than that at LCMAX (median: 121.55 kPa; interquartile range (IQR) = [60.70-180.32] kPa vs. 150.80 kPa; IQR = [91.39-235.75] kPa, p < 0.0001). If the critical stress at FCMIN alone were used, the stress condition of 238 of the 352 MR slices would be underestimated, while if the critical stress at LCMAX alone were used, 112 out of 352 would be underestimated. Stress analysis at both FCMIN and LCMAX should be used for a refined mechanical risk assessment of atherosclerotic plaques, since material failure at either site may result in rupture.
Abstract:
Computation of the dependency basis is the fundamental step in solving the membership problem for functional dependencies (FDs) and multivalued dependencies (MVDs) in relational database theory. We examine this problem from an algebraic perspective. We introduce the notion of the inference basis of a set M of MVDs and show that it contains the maximum information about the logical consequences of M. We propose the notion of a dependency-lattice and develop an algebraic characterization of the inference basis using simple notions from lattice theory. We also establish several interesting properties of dependency-lattices related to the implication problem. Founded on our characterization, we synthesize efficient algorithms for (a) computing the inference basis of a given set M of MVDs; (b) computing the dependency basis of a given attribute set w.r.t. M; and (c) solving the membership problem for MVDs. We also show that our results naturally extend to incorporate FDs in a way that enables the solution of the membership problem for FDs and MVDs taken together. Finally, we show that our algorithms are more efficient than existing ones when used to solve what we term the ‘generalized membership problem’.
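The paper's lattice-based algorithms target MVDs; for orientation, the classical attribute-closure algorithm, which solves the membership problem for FDs alone (and is not the paper's method), can be sketched as:

```python
def attribute_closure(attrs, fds):
    """Closure of attrs under FDs given as (lhs, rhs) pairs of attribute sets.

    An FD X -> Y is implied by the given FDs iff Y is a subset of the closure
    of X, which answers the membership problem for FDs.
    """
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # Fire the rule lhs -> rhs when lhs is already in the closure.
            if set(lhs) <= closure and not set(rhs) <= closure:
                closure |= set(rhs)
                changed = True
    return closure

fds = [({'A'}, {'B'}), ({'B'}, {'C'})]
print(attribute_closure({'A'}, fds))   # {'A', 'B', 'C'}: so A -> C is implied
```

The dependency basis for MVDs plays an analogous role to this closure, but partitions the remaining attributes rather than producing a single set.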
Abstract:
Increasing attention has been focused on methods that deliver pharmacologically active compounds (e.g. drugs, peptides and proteins) in a controlled fashion, so that constant, sustained, site-specific or pulsatile action can be attained. Ion-exchange resins have been widely studied in medical and pharmaceutical applications, including controlled drug delivery, leading to the commercialisation of some resin-based formulations. Ion-exchangers provide an efficient means to adjust and control drug delivery, as the electrostatic interactions enable precise control of the ion-exchange process and, thus, a more uniform and accurate control of drug release compared to systems that are based only on physical interactions. Unlike the resins, only a few studies have been reported on ion-exchange fibers in drug delivery. However, ion-exchange fibers have many advantageous properties compared to the conventional ion-exchange resins, such as more efficient compound loading into and release from the ion-exchanger, easier incorporation of drug-sized compounds, enhanced control of the ion-exchange process, better mechanical, chemical and thermal stability, and good formulation properties, which make the fibers attractive materials for controlled drug delivery systems. In this study, the factors affecting the nature and strength of the binding/loading of drug-sized model compounds into the ion-exchange fibers were evaluated comprehensively and, moreover, the controllability of subsequent drug release/delivery from the fibers was assessed by modifying the conditions of the external solutions. The feasibility of ion-exchange fibers for the simultaneous delivery of two drugs in combination was also studied by dual loading. Donnan theory and theoretical modelling were applied to gain a mechanistic understanding of these factors.
The experimental results imply that incorporation of the model compounds into the ion-exchange fibers was attained mainly as a result of ionic bonding, with an additional contribution from non-specific interactions. Increasing the ion-exchange capacity of the fiber or decreasing the valence of the loaded compounds increased the molar loading, while more efficient release of the compounds was observed consistently under conditions where the valence or concentration of the extracting counter-ion was increased. Donnan theory was capable of fully interpreting the ion-exchange equilibria, and the theoretical modelling precisely supported the experimental observations. The physico-chemical characteristics (lipophilicity, hydrogen-bonding ability) of the model compounds and the framework of the fibrous ion-exchanger influenced the affinity of the drugs towards the fibers and may, thus, affect both drug loading and release. It was concluded that precisely controlled drug delivery may be tailored for each compound, in particular by choosing a suitable ion-exchange fiber and optimizing the delivery system to take into account the external conditions, also when delivering two drugs simultaneously.
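An idealised sketch of the kind of Donnan equilibrium calculation referred to above (assuming a cation-exchange fiber with complete co-ion exclusion; the capacity and concentrations are illustrative values, not those of the study):

```python
# Ideal Donnan partitioning: each cation i distributes as c_fiber = c_ext * lam**z_i,
# with the ratio lam fixed by electroneutrality against the fiber's fixed charges.
def donnan_ratio(c_ext, z, X):
    """Solve sum_i z_i * c_i * lam**z_i = X for lam by bisection.

    c_ext : external cation concentrations (mol/l), z : their valences,
    X : fixed-charge (ion-exchange) capacity of the fiber phase.
    Complete co-ion exclusion is assumed (ideal Donnan picture).
    """
    f = lambda lam: sum(zi * ci * lam**zi for ci, zi in zip(c_ext, z)) - X
    lo, hi = 1e-9, 1e9            # f is monotone increasing in lam
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Monovalent model drug plus a monovalent counter-ion; raising the external
# salt level lowers the drug loading, as observed experimentally.
drug, z_drug, capacity = 0.001, 1, 1.0   # illustrative values
loadings = []
for salt in (0.01, 0.1):
    lam = donnan_ratio([drug, salt], [z_drug, 1], capacity)
    loadings.append(drug * lam**z_drug)
print(loadings)
```

The monotone drop in loading with increasing external counter-ion concentration mirrors the release behaviour described in the abstract.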
Abstract:
A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix of the conductor, we suggest the distribution of maximum information entropy, constrained by the following physical requirements: 1) flux conservation, 2) time-reversal invariance and 3) scaling, with the length of the conductor, of the two lowest cumulants of ζ, where the dimensionless resistance equals sinh²ζ. The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.
Abstract:
Timoshenko's shear deformation theory is widely used for the dynamical analysis of shear-flexible beams. This paper presents a comparative study of the shear deformation theory with a higher order model, of which Timoshenko's shear deformation model is a special case. Results indicate that while Timoshenko's shear deformation theory gives reasonably accurate information regarding the set of bending natural frequencies, there are considerable discrepancies in the information it gives regarding the mode shapes and dynamic response, and so there is a need to consider higher order models for the dynamical analysis of flexure of beams.
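For a simply supported beam, both theories admit sinusoidal mode shapes, so the frequency reduction predicted by Timoshenko's model follows in closed form; a minimal sketch, assuming an illustrative steel section (the values are not from the paper):

```python
import numpy as np

# Assumed steel section (illustrative values, not from the paper).
E, G, rho, kappa = 210e9, 80e9, 7850.0, 5.0 / 6.0   # Pa, Pa, kg/m^3, shear factor
L, b, h = 1.0, 0.02, 0.05                            # m
A, I = b * h, b * h**3 / 12

n = 1
k = n * np.pi / L                                    # wavenumber of mode n

# Euler-Bernoulli (no shear deformation): omega^2 = E I k^4 / (rho A).
w_eb = np.sqrt(E * I * k**4 / (rho * A))

# Timoshenko: sin(kx) modes give a quadratic a*w^4 - b2*w^2 + c = 0 in omega^2;
# the smaller root (computed in a cancellation-safe form) is the bending branch.
a = rho**2 * I / (kappa * G)
b2 = rho * A + rho * I * (1 + E / (kappa * G)) * k**2
c = E * I * k**4
w_t = np.sqrt(2 * c / (b2 + np.sqrt(b2**2 - 4 * a * c)))
print(w_eb, w_t)   # Timoshenko predicts the lower bending frequency
```

The gap between the two frequencies grows with mode number n, which is consistent with the need for refined models in the higher part of the spectrum.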
Abstract:
With the extension of the work of the preceding paper, the relativistic front form for Maxwell's equations for electromagnetism is developed and shown to be particularly suited to the description of paraxial waves. The generators of the Poincaré group in a form applicable directly to the electric and magnetic field vectors are derived. It is shown that the effect of a thin lens on a paraxial electromagnetic wave is given by a six-dimensional transformation matrix, constructed out of certain special generators of the Poincaré group. The method of construction guarantees that the free propagation of such waves as well as their transmission through ideal optical systems can be described in terms of the metaplectic group, exactly as found for scalar waves by Bacry and Cadilhac. An alternative formulation in terms of a vector potential is also constructed. It is chosen in a gauge suggested by the front form and by the requirement that the lens transformation matrix act locally in space. Pencils of light with accompanying polarization are defined for statistical states in terms of the two-point correlation function of the vector potential. Their propagation and transmission through lenses are briefly considered in the paraxial limit. This paper extends Fourier optics and completes it by formulating it for the Maxwell field. We stress that the derivations depend explicitly on the "henochromatic" idealization as well as the identification of the ideal lens with a quadratic phase shift and are heuristic to this extent.
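At the scalar paraxial level, free propagation and an ideal thin lens reduce to the familiar ray-transfer (ABCD) matrices; a minimal sketch of these standard Fourier-optics matrices (not the paper's six-dimensional Poincaré-group generators) showing the 2f-2f imaging condition:

```python
import numpy as np

def propagate(d):
    """Paraxial ray-transfer matrix for free propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Ideal thin lens of focal length f (a quadratic phase in wave optics)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# 2f-2f imaging: object plane -> 2f of free space -> lens -> 2f of free space.
f = 0.1
M = propagate(2 * f) @ thin_lens(f) @ propagate(2 * f)
print(M)   # B = M[0, 1] vanishes (imaging); magnification A = M[0, 0] = -1
```

The vanishing B element is the imaging condition; the paper's six-dimensional matrices play the analogous role for the full electromagnetic field.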
Abstract:
The extension of Hehl's Poincaré gauge theory to more general groups that include space-time diffeomorphisms is worked out for two particular examples, one corresponding to the action of the conformal group on Minkowski space, and the other to the action of the de Sitter group on de Sitter space; the effect of these groups on physical fields is also examined.
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
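The effect of subsample size on the number of species detected can be mimicked with a toy simulation (a hypothetical long-tailed species-abundance distribution, not the study's data):

```python
import random

random.seed(1)

# Hypothetical catch: long-tailed species-abundance distribution, so most
# species are rare, as in the study (values are illustrative, not the data).
n_species = 100
abundances = [max(1, int(1000 * (s + 1) ** -1.5)) for s in range(n_species)]
catch = [s for s, n in enumerate(abundances) for _ in range(n)]

def species_found(fraction):
    """Number of species seen when a random fraction of the catch is sorted."""
    sub = random.sample(catch, int(fraction * len(catch)))
    return len(set(sub))

def mean_species_found(fraction, trials=20):
    return sum(species_found(fraction) for _ in range(trials)) / trials

for frac in (0.1, 0.5):
    print(frac, mean_species_found(frac) / n_species)
```

As in the study, a small sorted fraction recovers a disproportionately small share of the species because most species occur only once or twice in the whole catch.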
Abstract:
The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment. The MTD is usually defined in terms of a tolerable probability, q*, of toxicity. Our objective is to find the highest dose with toxicity risk that does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose with toxicity not exceeding q*, among the doses under consideration. The proposed method is very general in the sense that criteria other than the one considered here can be optimized and that optimal dose assignment can be defined in terms of patients within or outside the trial. It includes as an important special case the continual reassessment method. A numerical study indicates the strategy compares favourably with other phase I designs.
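The selection criterion itself, choosing the highest dose whose estimated toxicity does not exceed q*, can be illustrated with a simple beta-binomial estimate (hypothetical counts; this is not the paper's optimal sequential design):

```python
# Hypothetical toxicity counts after treating 6 patients at each of 4 doses.
tox = [0, 1, 2, 5]
n   = [6, 6, 6, 6]
q_star = 0.33

# Posterior mean toxicity probability at each dose under a uniform Beta(1,1) prior.
post_mean = [(t + 1) / (m + 2) for t, m in zip(tox, n)]

# The abstract's criterion: the highest dose whose toxicity risk does not
# exceed q_star (rather than the dose closest to q_star, as in the CRM).
eligible = [i for i, p in enumerate(post_mean) if p <= q_star]
mtd = max(eligible) if eligible else None
print(post_mean, mtd)   # doses 0 and 1 are eligible; dose 1 is selected
```

Note how a "closest to q*" rule would instead pick dose 2 here (0.375 is nearer to 0.33 than 0.25 is), which is exactly the distinction the abstract draws.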
Abstract:
The study analyses the influence of chaos theory on fiction and on literary studies, and argues that the role of chaos theory in the literary field is best understood through the concepts it opens up. Rather than being applied directly, chaos theory has enabled new kinds of conversation about old topics, and concepts drawn from the natural sciences have led to previously deadlocked arguments being reopened from a new perspective. The dissertation concentrates on three areas: theorizing the structure of the literary work, conceiving and describing human (especially authorial) identity, and reflecting on the relation between fiction and reality. The aim of the study is to show how these topics have been approached through chaos theory both in literary scholarship and in literary works themselves. At the centre of the dissertation are analyses of works by the novelist John Barth, the dramatist Tom Stoppard and the poet Jorie Graham. These writers draw on chaos theory for ways of conceptualizing structures that are at once dynamic processes and graspable forms. Prominent literary themes also include the paradoxically recognizable yet ever-changing identity of the human being, and a reality that escapes final appropriation yet remains fascinating and sought after. Through the analysis of these writers' works and through theoretical discussion, the dissertation brings out a humanistic perspective on the significance of chaos theory in literature, one that emphasizes coherence, intelligibility and realism and that has been left in the shadows by previous research.
Abstract:
This dissertation is a theoretical study of finite-state based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on the following three aspects of FSIGs.

(i) Computational complexity of grammars under limiting parameters. The computational complexity of practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction.

(ii) Linguistically applicable structural representations. Related to linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented, including, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are linear-time parseable.

(iii) Compilation and simplification of linguistic constraints. Efficient compilation methods for certain regular operations such as generalized restriction are presented. These include an elegant algorithm that has already been adopted as the approach in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations of parse forests is sketched.
These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and to develop more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
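The core operation behind finite-state intersection parsing, intersecting constraint automata, can be illustrated by running two toy DFAs in lockstep, which traces the standard product construction (the automata are illustrative, not grammars from the thesis):

```python
# Two toy DFAs over {'a', 'b'}:
# A1 accepts strings with an even number of a's; A2 accepts strings ending in b.
A1 = {'start': 0, 'accept': {0},
      'delta': {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}}
A2 = {'start': 0, 'accept': {1},
      'delta': {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}}

def intersect_accepts(s):
    """Run both automata in lockstep: the pair of states traces the product
    automaton, so acceptance requires both components to accept."""
    q1, q2 = A1['start'], A2['start']
    for ch in s:
        q1, q2 = A1['delta'][(q1, ch)], A2['delta'][(q2, ch)]
    return q1 in A1['accept'] and q2 in A2['accept']

print(intersect_accepts('aab'))   # True: even number of a's and ends in b
print(intersect_accepts('ab'))    # False: odd number of a's
```

In an FSIG setting each constraint contributes one such automaton, and a string (or annotated string) is grammatical only if it is accepted by every constraint simultaneously.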