852 results for Numerical approximation and analysis


Relevance: 100.00%

Abstract:

There is a growing need for parametric design software that communicates building performance feedback during early architectural exploration to support decision-making. This paper examines how the design and analysis cycle can be closed to provide active and concurrent feedback between the architecture and services engineering domains. It presents the structure of an openly customisable design system that couples parametric modelling and energy analysis software to allow designers to assess the performance of early design iterations quickly. Finally, it discusses how user interactions with the system foster information exchanges that facilitate the sharing of design intelligence across disciplines.
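A minimal sketch of the kind of closed design-analysis loop described, not the paper's system: each candidate parameter set is passed to an energy analysis whose result feeds back into the next iteration. The function `run_energy_analysis` and both parameters are hypothetical stand-ins, not part of any tool named in the paper.

```python
# Sketch of a closed design-analysis loop: regenerate the parametric model
# for each candidate parameter set, run an energy analysis, and feed the
# result back into the search for the next design iteration.
from itertools import product

def run_energy_analysis(params: dict) -> float:
    """Hypothetical placeholder for the coupled energy simulation."""
    # A toy surrogate: energy use rises with glazing, falls with shading.
    return params["window_ratio"] * 120 + (1 - params["shading_depth"]) * 80

best = None
for window_ratio, shading_depth in product([0.2, 0.4, 0.6], [0.0, 0.5, 1.0]):
    params = {"window_ratio": window_ratio, "shading_depth": shading_depth}
    energy = run_energy_analysis(params)      # concurrent feedback step
    if best is None or energy < best[1]:
        best = (params, energy)               # keep the best-performing iteration

print("best design iteration:", best)
```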

Relevance: 100.00%

Abstract:

Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure that repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species.
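For context, a minimal sketch of one common variant of random sequential box covering, assuming an unweighted networkx graph (the thesis's exact algorithm and the PPI data are not reproduced here): the number of boxes N_B(l_B) needed to cover the network scales as l_B ** (-d_B) for a fractal network, and d_B is estimated from the log-log slope.

```python
# Random sequential box covering: repeatedly seed a box at a random
# uncovered node and absorb all uncovered nodes within distance < l_B.
import random
import networkx as nx

def box_count(G: nx.Graph, l_B: int) -> int:
    """Number of boxes of diameter bound l_B needed to cover G (one run)."""
    uncovered = set(G.nodes())
    n_boxes = 0
    while uncovered:
        seed = random.choice(tuple(uncovered))
        lengths = nx.single_source_shortest_path_length(G, seed, cutoff=l_B - 1)
        uncovered -= uncovered.intersection(lengths)  # nodes near the seed
        n_boxes += 1
    return n_boxes

G = nx.barabasi_albert_graph(500, 2)   # synthetic stand-in for a PPI network
for l_B in (2, 3, 4, 5):
    print(l_B, box_count(G, l_B))      # regress log N_B on log l_B to get d_B
```

Because the covering is random, the count is usually averaged over many runs before fitting the slope.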
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of the edge between any two nodes as the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by the Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
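A minimal sketch of the standard HVG construction (Luque et al.): time points i and j are connected iff every sample strictly between them lies below both x[i] and x[j]. A plain Gaussian random walk is used here as a stand-in for fractional Brownian motion.

```python
# Horizontal visibility graph of a time series: naive O(n^3) construction,
# adequate for short series.
import random
import networkx as nx

def horizontal_visibility_graph(x) -> nx.Graph:
    G = nx.Graph()
    G.add_nodes_from(range(len(x)))
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            # i "sees" j iff everything strictly between them is lower
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                G.add_edge(i, j)
    return G

walk, s = [0.0], 0.0
for _ in range(199):                   # toy random walk, H = 0.5 analogue
    s += random.gauss(0, 1)
    walk.append(s)

G = horizontal_visibility_graph(walk)
degrees = [d for _, d in G.degree()]
print("mean degree:", sum(degrees) / len(degrees))  # HVG degree tails decay exponentially
```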

Relevance: 100.00%

Abstract:

The LiteSteel Beam (LSB) is a new hollow flange channel section developed using a patented dual electric resistance welding and cold-forming process. It has a unique geometry consisting of torsionally rigid rectangular hollow flanges and a slender web, and LSBs are commonly used as flexural members. However, LSB flexural members are subject to a relatively new lateral distortional buckling mode, which reduces their moment capacities. Unlike lateral torsional buckling, the lateral distortional buckling of LSBs is characterised by simultaneous lateral deflection, twist and cross-sectional change due to web distortion. Therefore a detailed investigation into the lateral buckling behaviour of LSB flexural members was undertaken using experiments and finite element analyses. This paper presents the details of suitable finite element models developed to simulate the behaviour and capacity of LSB flexural members subject to lateral buckling. The models included all significant effects that influence the ultimate moment capacities of such members, including material inelasticity, lateral distortional buckling deformations, web distortion, residual stresses, and geometric imperfections. Comparisons of the elastic buckling and ultimate moment capacity results with predictions from other numerical analyses, available buckling moment equations, and experimental results showed that the developed finite element models accurately predict the behaviour and moment capacities of LSBs. The validated model was then used in a detailed parametric study that produced accurate moment capacity data for all the LSB sections and improved design rules for LSB flexural members subject to lateral distortional buckling.
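As background only, a sketch of the classical elastic lateral-torsional buckling moment for a simply supported, doubly symmetric beam under uniform moment. Lateral distortional buckling of LSBs gives lower capacities because web distortion is not captured by this formula, which is precisely why the paper relies on finite element models; the section properties below are illustrative placeholders, not actual LSB values.

```python
# Classical elastic LTB moment:
#   M_cr = (pi/L) * sqrt(E*Iy*G*J + (pi**2 * E**2 * Iy * Iw) / L**2)
import math

def elastic_ltb_moment(L, E, G, Iy, J, Iw):
    """Elastic lateral-torsional buckling moment, uniform moment case."""
    return (math.pi / L) * math.sqrt(E * Iy * G * J
                                     + (math.pi ** 2 * E ** 2 * Iy * Iw) / L ** 2)

E, G = 200e9, 80e9                     # steel moduli, Pa
Iy, J, Iw = 1.0e-6, 5.0e-8, 1.0e-9     # m^4, m^4, m^6 (placeholder properties)
for L in (2.0, 4.0, 6.0):              # span in metres
    M = elastic_ltb_moment(L, E, G, Iy, J, Iw)
    print(f"L = {L} m: M_cr = {M / 1e3:.1f} kN*m")
```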

Relevance: 100.00%

Abstract:

In this paper, a three-dimensional nonlinear rigid body model has been developed for the investigation of the crashworthiness of a passenger train using the multibody dynamics approach. The model refers to a typical design of passenger cars and train consists commonly used in Australia. The high-energy and low-energy crush zones of the cars and the train consist are assumed, and their data are explicitly provided in the paper. The crash scenario is limited to the train colliding with a fixed barrier symmetrically. The simulations of a single car show that this initial design is only applicable for crash speeds of 35 km/h or lower. For higher speeds (e.g. 140 km/h), the crush lengths, the crush forces, or both, of the crush zone elements will have to be increased. It is generally better to increase the crush length than the crush force in order to keep the longitudinal deceleration of the passenger cars low.
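A minimal sketch, not the paper's model: a single rigid car body striking a fixed barrier through an elastic-perfectly-plastic crush element. The plateau force F_crush caps the longitudinal deceleration at F_crush/m, which illustrates why lengthening the crush zone is preferable to raising its force. All parameter values are assumed.

```python
# 1D lumped-mass crash sketch with explicit Euler integration.
m = 40_000.0          # kg, car body mass (assumed)
k = 5.0e7             # N/m, elastic stiffness of crush element (assumed)
F_crush = 2.0e6       # N, plateau (crush) force (assumed)
v = 35 / 3.6          # m/s, impact speed of 35 km/h
x, t, dt = 0.0, 0.0, 1e-5
max_crush = 0.0

while v > 0.0:                        # integrate until the car stops
    force = min(k * x, F_crush)       # elastic up to the plateau, then plastic
    v += (-force / m) * dt            # deceleration is bounded by F_crush / m
    x += v * dt
    t += dt
    max_crush = max(max_crush, x)

print(f"stopped after {t * 1e3:.0f} ms, crush length used = {max_crush:.2f} m, "
      f"peak deceleration = {F_crush / m / 9.81:.1f} g")
```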

Relevance: 100.00%

Abstract:

A Maintenance Test Section Survey (MTSS) was conducted as part of a Peer State Review of the Texas Maintenance Program held October 5–7, 2010. The purpose of the MTSS was to conduct a field review of 34 highway test sections and obtain participants' opinions about pavement, roadside, and maintenance conditions. The goal was to cross-reference or benchmark TxDOT's maintenance practices against practices used by selected peer states. Representatives from six peer states (California, Georgia, Kansas, Missouri, North Carolina, and Washington) were invited to Austin to attend a 3-day Peer State Review of TxDOT Maintenance Practices Workshop and to participate in a field survey of a number of pre-selected one-mile roadway sections. It should be emphasized that the objective of the survey was not to evaluate and grade or score TxDOT's road network, but rather to determine whether the selected roadway sections met acceptable standards of service as perceived by the Directors of Maintenance or senior maintenance managers from the peer states...

Relevance: 100.00%

Abstract:

Knowledge of cable parameters is well established, but knowledge of the environment in which cables are buried lags behind. Research at Queensland University of Technology has been aimed at obtaining and analysing actual daily field values of the thermal resistivity and diffusivity of the soil around power cables. On-line monitoring systems have been developed and installed, with a data logger system and buried spheres that use an improved technique to measure thermal resistivity and diffusivity over a short period. Results based on long-term continuous field data are given. A probabilistic approach is developed to establish the correlation between the measured field thermal resistivity values and rainfall data from weather bureau records. These field data can reduce the risk in cable rating decisions and provide a basis for reliable prediction of “hot spots” in an existing cable circuit.
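A minimal sketch of the kind of correlation study described: relating daily thermal resistivity to rainfall over a trailing window. The data arrays below are synthetic placeholders, not the QUT field measurements or any weather bureau record.

```python
# Correlate soil thermal resistivity with a 30-day trailing rainfall mean.
import numpy as np

rng = np.random.default_rng(0)
days = 365
rainfall = rng.gamma(shape=0.5, scale=8.0, size=days)          # mm/day (synthetic)

# Soil dries slowly, so resistivity rises when recent rainfall is low.
recent = np.convolve(rainfall, np.ones(30) / 30, mode="same")  # 30-day mean
resistivity = 3.5 - 0.02 * recent + rng.normal(0, 0.05, days)  # K*m/W (synthetic)

r = np.corrcoef(recent, resistivity)[0, 1]
print(f"correlation of 30-day rainfall with resistivity: {r:.2f}")  # strongly negative
```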

Relevance: 100.00%

Abstract:

Fire safety has become an important part of structural design due to the ever-increasing loss of property and lives during fires. The fire rating of load-bearing wall systems made of Light gauge Steel Frames (LSF) is determined using fire tests based on the standard time-temperature curve given in ISO 834. However, modern residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence a detailed fire research study into the performance of load-bearing LSF walls was undertaken using a series of realistic design fire curves developed based on Eurocode parametric curves and Barnett's BFD curves. It included both full-scale fire tests and numerical studies of LSF walls without any insulation, and of the recently developed externally insulated composite panels. This paper presents the details of the fire tests first, and then the numerical models of the tested LSF wall studs. It shows that suitable finite element models can be developed to predict the fire rating of load-bearing walls under real fire conditions. The paper also describes the structural and fire performance of externally insulated LSF walls in comparison to non-insulated walls under real fires, and highlights the effects of standard and real fire curves on the fire performance of LSF walls.
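A sketch of the two fire curves mentioned: the ISO 834 standard curve and the Eurocode (EN 1991-1-2) parametric heating curve. The opening factor O and thermal absorptivity b below are illustrative compartment values, not those of the tested walls.

```python
# ISO 834 standard curve and EN 1991-1-2 parametric heating phase.
import math

def iso834(t_min: float) -> float:
    """ISO 834 standard time-temperature curve; t in minutes, result in deg C."""
    return 20 + 345 * math.log10(8 * t_min + 1)

def eurocode_parametric(t_min: float, O=0.04, b=1160) -> float:
    """EN 1991-1-2 parametric heating phase; O in m^0.5, b in J/(m^2 s^0.5 K)."""
    gamma = ((O / b) / (0.04 / 1160)) ** 2     # time-scaling factor
    t_star = (t_min / 60.0) * gamma            # expanded time in hours
    return 20 + 1325 * (1 - 0.324 * math.exp(-0.2 * t_star)
                          - 0.204 * math.exp(-1.7 * t_star)
                          - 0.472 * math.exp(-19 * t_star))

for t in (15, 30, 60):                         # minutes
    print(t, round(iso834(t)), round(eurocode_parametric(t)))
```

With O = 0.04 and b = 1160 the scaling factor is 1 and the parametric curve closely tracks ISO 834; other compartments heat faster or slower, which is what motivates realistic design fire curves.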

Relevance: 100.00%

Abstract:

This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that the statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective analysis of 3-way data arrays (users × keywords × time), known as higher-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of higher-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users × keywords × time × servers) and analyze them to show the impact of server and window size selections on the results.
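A minimal sketch of the first step described: a two-sample Kolmogorov-Smirnov test comparing a sample from a single server with a pooled sample from several servers. Synthetic message inter-arrival times stand in for the chatroom data.

```python
# Two-sample KS test: single-server sample vs. pooled multi-server sample.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
single_server = rng.exponential(scale=2.0, size=2_000)
multi_server = np.concatenate(
    [rng.exponential(scale=2.0, size=500) for _ in range(4)]  # 4 servers pooled
)

stat, p = ks_2samp(single_server, multi_server)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # large p: no significant difference
```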

Relevance: 100.00%

Abstract:

Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of the post-deployment network configuration, and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure path on which every link is secured with a key. We first provide a comparative survey of well-known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented, and they are compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify trade-offs among them to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. The performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, in which key agreement algorithms without authentication must be executed over a secure path. The length of the secure path impacts the power consumption and the initialization delay of a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that they are both NP-Hard and MAX-SNP-Hard. Having established these inapproximability results, we focus on addressing the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm where each pair of nodes can establish a key with authentication by using their neighbors as witnesses.
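A minimal sketch of shared-key discovery in random key predistribution, the probabilistic family surveyed here (not the thesis's combinatorial constructions): each node receives a random key-chain from a common pool, and two nodes can communicate directly iff their chains intersect. Pool and chain sizes are illustrative.

```python
# Random key predistribution: assign key-chains, then count securable links.
import random

POOL_SIZE, CHAIN_SIZE, NODES = 1_000, 50, 100
pool = range(POOL_SIZE)

chains = [frozenset(random.sample(pool, CHAIN_SIZE)) for _ in range(NODES)]

def shared_key(a: int, b: int):
    """Return one common key id of nodes a and b, or None if they share none."""
    common = chains[a] & chains[b]
    return min(common) if common else None

links = sum(1 for a in range(NODES) for b in range(a + 1, NODES)
            if shared_key(a, b) is not None)
print(f"directly securable links: {links} of {NODES * (NODES - 1) // 2}")
```

Node pairs without a common key must route through a secure path, whose length drives the cost analysed in the third part.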

Relevance: 100.00%

Abstract:

Unmanned Aerial Vehicles (UAVs) have become a significant and growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen the use of UAVs in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limited space and mass budget on a UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and the operational goals. This paper presents the development of a parallel Hybrid Electric Propulsion System (HEPS) on a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. An IOL analysis of an Aerosonde engine was performed, and the most efficient operating points for this engine (i.e. those providing the greatest torque output for the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared with the ICE-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
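A minimal sketch of picking an ideal operating line from an engine map: for each required power level, choose the (speed, torque) point with the lowest fuel flow. The quadratic fuel-flow map below is a synthetic placeholder, not Aerosonde engine data.

```python
# Ideal operating line (IOL) selection over a synthetic engine map.
import numpy as np

speeds = np.linspace(2000, 6000, 41)               # rpm
torques = np.linspace(1, 8, 71)                    # N*m
S, T = np.meshgrid(speeds, torques)

power = S * 2 * np.pi / 60 * T                     # W
fuel = 1e-4 * (0.4 * S / 1000 + 0.3 * T ** 2 + 2.0)  # kg/s (synthetic map)

iol = []
for p in (500, 1000, 1500, 2000):                  # required power levels, W
    feasible = np.isclose(power, p, rtol=0.02)     # grid points delivering ~p
    if not feasible.any():
        continue
    idx = np.where(feasible)
    best = np.argmin(fuel[idx])                    # least fuel flow at that power
    iol.append((p, S[idx][best], T[idx][best]))

for p, s, t in iol:
    print(f"{p:5.0f} W -> {s:5.0f} rpm, {t:4.2f} N*m on the IOL")
```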

Relevance: 100.00%

Abstract:

Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct. Complex computer models, or codes, are then used in place of physical experiments; this has led to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the question of how many computer experiments to run and how they should be augmented is studied, and attention is given to the case when the response is a function over time.
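A minimal sketch of a space-filling design for a computer experiment: Latin hypercube sampling of the input space, a common choice in this field (the thesis's specific designs are not shown here). The input ranges are assumed for illustration.

```python
# Latin hypercube design: 10 runs over 3 input variables.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)   # 3 input variables
unit_sample = sampler.random(n=10)           # 10 runs in [0, 1)^3

# Scale each column to the physical range of its input (assumed ranges).
design = qmc.scale(unit_sample, l_bounds=[0.0, 10.0, 300.0],
                   u_bounds=[1.0, 50.0, 400.0])

for run in design:
    print(run)    # each row is one input setting for a run of the computer code
```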

Relevance: 100.00%

Abstract:

The latest paradigm shift in government, termed Transformational Government, puts the citizen at the centre of attention. Including citizens in the design of online one-stop portals can help governmental organisations become more customer-focussed. This study describes the initial efforts of an Australian state government to develop an information architecture to structure the content of its future one-stop portal. To this end, card sorting exercises were conducted and analysed, utilising contemporary approaches found in academic and non-scientific literature. This paper describes the findings of the card sorting exercises in this particular case and discusses the suitability of the applied approaches in general. These are distinguished into non-statistical, statistical, and hybrid approaches. On the one hand, this paper contributes to academia by describing the application of different card sorting approaches and discussing their strengths and weaknesses. On the other hand, it contributes to practice by explaining the approach taken by the authors' research partner to develop a customer-focussed governmental one-stop portal, thereby providing decision support for practitioners with regard to the different analysis methods that can be used to complement recent approaches in Transformational Government.
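A minimal sketch of one statistical approach to card sort analysis: build a card-by-card co-occurrence (similarity) matrix from participants' groupings, then cluster it hierarchically. The cards and example sorts below are made up, not the study's data.

```python
# Hierarchical clustering of a card sort co-occurrence matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cards = ["licence", "registration", "parking", "school", "childcare"]
# Each participant's sort: a list of groups of card indices (hypothetical).
sorts = [
    [[0, 1, 2], [3, 4]],
    [[0, 1], [2], [3, 4]],
    [[0, 1, 2], [3], [4]],
]

n = len(cards)
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                co[i, j] += 1
co /= len(sorts)                  # fraction of participants grouping i with j

dist = squareform(1 - co, checks=False)   # condensed distance matrix
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
for card, lab in zip(cards, labels):
    print(lab, card)              # suggested top-level portal categories
```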

Relevance: 100.00%

Abstract:

High power piezoelectric ultrasonic transducers have been widely exploited in a variety of applications. The critical behaviour of a piezoelectric device is encapsulated in its resonant frequencies, since its power transmission is greatest at these frequencies. Therefore, power electronic converters should be tuned to these resonant frequencies to convert electrical power to mechanical power efficiently. However, structural and environmental changes cause variations in the device's resonant frequencies, which can degrade system performance. Hence, estimating the device's resonant frequencies within the operating setup can significantly improve system performance. This paper proposes an efficient resonant frequency estimation approach that maintains the performance of high power ultrasonic applications using the employed power converter. Experimental validation indicates the effectiveness of the proposed method.
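A minimal sketch of one generic way to estimate a transducer's resonance from the converter side, not the paper's method: sweep the drive frequency and pick the point of minimum apparent impedance |V|/|I|, where current draw peaks. The impedance measurement below is a synthetic stand-in.

```python
# Resonance estimation by frequency sweep over a synthetic impedance curve.
import numpy as np

f0_true, Q = 28_000.0, 40.0                  # hidden "true" resonance (assumed)

def measure_impedance(f: float) -> float:
    """Synthetic |Z(f)| with a sharp minimum at f0_true."""
    detune = f / f0_true - f0_true / f
    return 50.0 * np.hypot(1.0, Q * detune)  # ohms

freqs = np.linspace(25_000, 31_000, 601)     # sweep in 10 Hz steps
z = np.array([measure_impedance(f) for f in freqs])
f_est = freqs[np.argmin(z)]                  # tune the converter here
print(f"estimated resonance: {f_est:.0f} Hz (true {f0_true:.0f} Hz)")
```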

Relevance: 100.00%

Abstract:

Cell line array (CMA) and tissue microarray (TMA) technologies are high-throughput methods for analysing both the abundance and the distribution of gene expression in a panel of cell lines or multiple tissue specimens in an efficient and cost-effective manner. The process is based on Kononen's method of extracting a cylindrical core of paraffin-embedded donor tissue and inserting it into a recipient paraffin block. Donor tissue from surgically resected paraffin-embedded tissue blocks, frozen needle biopsies or cell line pellets can all be arrayed in the recipient block. The representative area of interest is identified and circled on a haematoxylin and eosin (H&E)-stained section of the donor block. Using a predesigned map showing a precise spacing pattern, a high-density array of up to 1,000 cores of cell pellets and/or donor tissue can be embedded into the recipient block using a tissue arrayer from Beecher Instruments. Depending on the depth of the cell line/tissue removed from the donor block, 100–300 consecutive sections can be cut from each CMA/TMA block. Sections can be stained for in situ detection of protein, DNA or RNA targets using immunohistochemistry (IHC), fluorescent in situ hybridisation (FISH) or mRNA in situ hybridisation (RNA-ISH), respectively. This chapter provides detailed methods for CMA/TMA design, construction and analysis, with in-depth notes on all technical aspects, including tips to deal with common pitfalls the user may encounter. © Springer Science+Business Media, LLC 2011.