76 results for NETWORK DESIGN PROBLEMS
Abstract:
Housing in the UK accounts for 30.5% of all energy consumed and is responsible for 25% of all carbon emissions. The UK Government's Code for Sustainable Homes requires all new homes to be zero carbon by 2016. The development and widespread diffusion of low and zero carbon (LZC) technologies is recognised as a key means for housing developers to deliver against this zero-carbon agenda. The innovation challenge of designing and incorporating these technologies into housing developers' standard design and production templates will usher in significant technical and commercial risks. In this paper we report early results from an ongoing Engineering and Physical Sciences Research Council project examining the innovation logic and trajectory of LZC technologies in new housing. The principal theoretical lens for the research is the socio-technical network approach, which considers actors' interests and interpretative flexibilities around technologies, and how actors negotiate and reproduce 'acting spaces' to shape, in this case, the selection and adoption of LZC technologies. The initial findings reveal that the technology networks around new housing developments are very complex in form and operation, involving a range of actors and viewpoints that vary from one development to the next.
Abstract:
Purpose – This paper summarises the main research findings from a detailed, qualitative set of structured interviews and case studies of private finance initiative (PFI) schemes in the UK that involve the construction of built facilities. The research, which was funded by the Foundation for the Built Environment, examines the emergence of PFI in the UK. Benefits and problems in the PFI process are investigated. Best practice, the key critical factors for success, and lessons for the future are also analysed. Design/methodology/approach – The research is based on 11 semi-structured interviews conducted with stakeholders in key PFI projects in the UK. Findings – The research demonstrates that value for money and risk transfer are key success criteria. High procurement and transaction costs are a feature of PFI projects, and the large-scale nature of PFI projects frequently acts as a barrier to entry. Research limitations/implications – The research is based on a limited number of in-depth case study interviews. The paper also shows that further research is needed to find better ways to measure these concepts empirically. Practical implications – The paper highlights four main areas of practical improvement in the PFI process: value for money assessment; establishing end-user needs; developing competitive markets; and developing appropriate skills in the public sector. Originality/value – The paper examines the drivers, barriers, and critical success factors for PFI in the UK in detail for the first time and will be of value to property investors, financiers, and others involved in the PFI process.
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Driven by the requirement of numerical weather prediction models to solve the governing finite-difference equations on such a grid lattice, objective analysis is a three-dimensional (though mostly two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well. We have fairly good coverage of surface observations eight times a day, and several upper-air stations make radiosonde and radiowind observations four times a day. With a 3-hour step in the analysis-forecasting cycle, instead of the 12 hours most often applied, we may without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off the analysis time, and even observations taken during strong transient motion would fall within a horizontal mesh of 500 km × 500 km.
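Objective analysis of this kind is classically performed with successive-correction schemes such as Cressman's. Below is a minimal, illustrative sketch of Cressman weighting for interpolating scattered observations onto a regular grid; the influence radius and the toy station data are assumptions, not taken from the text.

```python
# Minimal sketch of Cressman-style objective analysis: interpolating
# irregularly spaced observations onto a regular grid (illustrative only;
# the influence radius R and the toy data below are assumptions).
import numpy as np

def cressman(obs_xy, obs_val, grid_x, grid_y, R=500.0):
    """Weight each observation by (R^2 - r^2) / (R^2 + r^2) within radius R."""
    analysis = np.full((len(grid_y), len(grid_x)), np.nan)
    for i, gy in enumerate(grid_y):
        for j, gx in enumerate(grid_x):
            r2 = (obs_xy[:, 0] - gx) ** 2 + (obs_xy[:, 1] - gy) ** 2
            w = np.where(r2 < R**2, (R**2 - r2) / (R**2 + r2), 0.0)
            if w.sum() > 0:                      # skip points with no nearby obs
                analysis[i, j] = np.average(obs_val, weights=w)
    return analysis

# Toy example: five stations within a 500 km x 500 km mesh (coordinates in km).
obs_xy  = np.array([[100, 200], [400, 450], [250, 300], [50, 480], [480, 60]])
obs_val = np.array([10.2, 12.5, 11.0, 9.8, 13.1])
grid = np.linspace(0, 500, 6)
print(cressman(obs_xy, obs_val, grid, grid, R=300.0))
```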
Abstract:
We develop an orthogonal forward selection (OFS) approach to construct radial basis function (RBF) network classifiers for two-class problems. Our approach integrates several concepts in probabilistic modelling, including cross validation, mutual information and Bayesian hyperparameter fitting. At each stage of the OFS procedure, one model term is selected by maximising the leave-one-out mutual information (LOOMI) between the classifier's predicted class labels and the true class labels. We derive the formula of LOOMI within the OFS framework so that the LOOMI can be evaluated efficiently for model term selection. Furthermore, a Bayesian procedure of hyperparameter fitting is integrated into each stage of the OFS to infer the l2-norm based local regularisation parameter from the data. Since each forward stage effectively fits a one-variable model, this task is very fast. The classifier construction procedure terminates automatically, without the need for an additional stopping criterion, and yields very sparse RBF classifiers with excellent classification generalisation performance, which is particularly useful for noisy data sets with highly overlapping class distributions. A number of benchmark examples are employed to demonstrate the effectiveness of our proposed approach.
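To make the selection loop concrete, here is a minimal sketch of orthogonal forward selection over candidate RBF centres. It is illustrative only: it uses scikit-learn's generic mutual-information estimator as a stand-in for the paper's LOOMI criterion, and it omits the Bayesian local-regularisation step; the width and max_terms parameters are assumptions.

```python
# Simplified OFS sketch for an RBF classifier (illustrative, not the
# paper's method): greedy selection of orthogonalised RBF columns by
# a generic mutual-information score instead of the derived LOOMI.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rbf_column(X, centre, width):
    """Gaussian RBF response of every sample to one candidate centre."""
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2.0 * width ** 2))

def ofs_rbf(X, y, width=1.0, max_terms=10):
    candidates = list(range(len(X)))   # every training sample is a candidate centre
    selected, basis = [], []           # chosen centres / orthogonalised columns
    for _ in range(max_terms):
        best, best_score, best_col = None, -np.inf, None
        for j in candidates:
            col = rbf_column(X, X[j], width)
            for q in basis:            # Gram-Schmidt against already-selected terms
                col = col - (col @ q) / (q @ q) * q
            if np.allclose(col, 0.0):
                continue               # candidate adds nothing new
            score = mutual_info_classif(col.reshape(-1, 1), y, random_state=0)[0]
            if score > best_score:
                best, best_score, best_col = j, score, col
        if best is None:
            break
        selected.append(best)
        basis.append(best_col)
        candidates.remove(best)
    return selected
```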
Abstract:
To overcome divergent estimates produced from the same data, the proposed digital costing process adopts an integrated information-system design that develops the process knowledge and the costing system together. By employing and extending a widely used international standard, Industry Foundation Classes, the system provides an integrated process that can harvest information and knowledge from current quantity surveying practice, covering both costing methods and data. Knowledge of quantification is encoded from the literature, a motivating case, and standards, which can reduce the time consumed by current manual practice. Further development will represent the pricing process using a Bayesian-network-based knowledge representation approach. This hybrid form of knowledge representation can produce reliable estimates for construction projects. In practical terms, the knowledge management of quantity surveying can improve construction estimation systems. The theoretical significance of this study lies in the fact that its content and conclusions make it possible to develop an automatic estimation system based on a hybrid knowledge representation approach.
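As a minimal illustration of Bayesian-network-style pricing inference, the toy example below computes a marginal and a posterior by direct enumeration; the variables, states, and probabilities are hypothetical and not taken from the paper.

```python
# Toy Bayesian-network pricing inference by direct enumeration.
# Hypothetical two-node network: MarketCondition -> UnitRate.
# All probability numbers below are invented for illustration.

# Prior over the market condition.
p_market = {"tight": 0.3, "normal": 0.7}

# Conditional distribution of the unit rate given the market condition.
p_rate_given_market = {
    "tight":  {"high": 0.7, "low": 0.3},
    "normal": {"high": 0.2, "low": 0.8},
}

# Marginal probability of a high unit rate: sum over market states.
p_high = sum(p_market[m] * p_rate_given_market[m]["high"] for m in p_market)
print(f"P(UnitRate = high) = {p_high:.2f}")   # 0.3*0.7 + 0.7*0.2 = 0.35

# Posterior over the market condition given a high observed rate (Bayes' rule).
posterior = {m: p_market[m] * p_rate_given_market[m]["high"] / p_high
             for m in p_market}
print(posterior)   # {'tight': 0.6, 'normal': 0.4}
```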
Abstract:
Building Information Modeling (BIM) is the process of structuring, capturing, creating, and managing a digital representation of physical and/or functional characteristics of a built space [1]. Current BIM has limited ability to represent dynamic semantics and social information, often failing to consider building activity, behavior, and context, thus limiting integration with intelligent built-environment management systems. Research such as the development of Semantic Exchange Modules, or the linking of IFC with semantic web structures, demonstrates the need for building models to better support complex semantic functionality. To implement model semantics effectively, however, it is critical that model designers consider semantic information constructs. This paper discusses semantic models in relation to determining the most suitable information structure. We demonstrate how semantic rigidity can lead to significant long-term problems that can contribute to model failure. A sufficiently detailed feasibility study is advised to maximize the value obtained from the semantic model. In addition, we propose a set of questions to be used during a model's feasibility study, together with guidelines to help assess the most suitable method for managing semantics in a built environment.
Abstract:
Purpose – The purpose of this paper is to examine the reasons for the lack of research attention paid to the Middle East (ME) and Africa regions. In particular, this study seeks to identify the reasons for and implications of the paucity of ME- and Africa-based studies in high-quality international journals in the marketing field, with a specific focus on the challenges in conducting and publishing research on these regions. Design/methodology/approach – The authors conducted a systematic review of the literature on the ME and Africa regions to identify papers published in 23 high-quality marketing, international business, and advertising journals. This search resulted in 301 articles, among which 125 were based on primary or secondary data collected from a local source in those regions. The authors of these 125 articles constitute the Delphi study sample. These academics provided input in an effort to reach a consensus regarding the two proposed models of academic research in both regions. Findings – This paper differs from previous studies, where academic freedom emerged as the most important inhibitor to conducting and publishing research. The most frequently mentioned challenges in conducting research in Africa were access to data, data collection issues, diversity of the region, and lack of research support infrastructure. For the ME, the most often described challenges included validity and reliability of data, language barriers, data collection issues, and availability of a network of researchers. Editors' and reviewers' low interest and limited knowledge were ranked high in both regions. South Africa, Israel, and Turkey emerged as outliers, in which research barriers were less challenging than in the rest of the two regions. The authors attribute this difference to the high incidence of US-trained or US-based scholars originating from these countries. Originality/value – To the best of the authors' knowledge, no marketing studies have discussed the problems of publishing in high-quality international journals of marketing, international business, and advertising for either region. Thus, most of the issues the authors discuss in this paper offer new insightful results while supplementing previous research on the challenges of conducting and publishing research on specific world regions.
Abstract:
The Finnish Meteorological Institute, in collaboration with the University of Helsinki, has established a new ground-based remote-sensing network in Finland. The network consists of five topographically, ecologically and climatically different sites distributed from southern to northern Finland. The main goal of the network is to monitor air pollution and boundary layer properties in near real time, with a Doppler lidar and ceilometer at each site. In addition to these operational tasks, two sites are members of the Aerosols, Clouds and Trace gases Research InfraStructure Network (ACTRIS); a Ka band cloud radar at Sodankylä will provide cloud retrievals within CloudNet, and a multi-wavelength Raman lidar, PollyXT (POrtabLe Lidar sYstem eXTended), in Kuopio provides optical and microphysical aerosol properties through EARLINET (the European Aerosol Research Lidar Network). Three C-band weather radars are located in the Helsinki metropolitan area and are deployed for operational and research applications. We performed two inter-comparison campaigns to investigate the Doppler lidar performance, to compare the backscatter signal and wind profiles, and to optimize the lidar sensitivity by adjusting the telescope focus length and data-integration time to ensure sufficient signal-to-noise ratio (SNR) in low-aerosol-content environments. In terms of statistical characterization, the wind-profile comparison showed good agreement between the different lidars. Initially, there was a discrepancy in the SNR and attenuated backscatter coefficient profiles, which arose from an incorrectly reported telescope focus setting from one instrument, together with the need to calibrate. After diagnosing the true telescope focus length, calculating a new attenuated backscatter coefficient profile with the new telescope function and taking calibration into account, the resulting attenuated backscatter profiles all showed good agreement with each other. It was thought that harsh Finnish winters could pose problems but, due to the built-in heating systems, low ambient temperatures had no, or only a minor, impact on lidar operation, including scanning-head motion. However, accumulation of snow and ice on the lens has been observed, which can lead to the formation of a water/ice layer that attenuates the signal inconsistently. Thus, care must be taken to ensure continuous snow removal.
Abstract:
With the emerging prevalence of smartphones and 4G LTE networks, the demand for faster-better-cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes the network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper then sets out a design and a full cycle of a DNO system, and proposes a set of potential solutions for large-network and real-time DNO. Overall, this work marks a breakthrough towards the realization of DNO.
Abstract:
Years have passed since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still in its infancy, largely due to the lack of a breakthrough in minimizing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity driven by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, and the results demonstrate a speed-up of 7.5 times on a 16-core distributed system, compared with 6.1 times for our previous solution. Moreover, there is no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
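For context, the reported figures map onto the standard speed-up and parallel-efficiency definitions (a back-of-the-envelope reading, not a calculation from the paper):

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}, \qquad
E_{16}^{\text{new}} = \frac{7.5}{16} \approx 0.47, \qquad
E_{16}^{\text{prev}} = \frac{6.1}{16} \approx 0.38
```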
Abstract:
Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, the water cycle, and human health. These effects are potentially growing owing to rising trends in anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period 2000–2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire region, the annual mean aerosol optical depth (AOD) is underestimated by 15 to 44% across models relative to MISR (Multi-angle Imaging SpectroRadiometer), which gives the lowest AOD among the various satellite retrievals considered (MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular, during the post-monsoon and wintertime periods (October–January), when agricultural waste burning and anthropogenic emissions dominate, the models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimation of aerosol loading in the models generally occurs in the lower troposphere (below 2 km), based on comparisons of model aerosol extinction profiles with those from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA), and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter. Several possible causes for this common underestimation during the post-monsoon and wintertime periods are identified: aerosol hygroscopic growth and the formation of secondary inorganic aerosol are suppressed because relative humidity (RH) is biased far too low in the boundary layer, so foggy conditions are poorly represented in current models; nitrate aerosol is either missing or inadequately accounted for; and emissions from agricultural waste burning and biofuel usage are too low in the emission inventories. These common problems and possible causes, found across multiple models, point out directions for future model improvement in this important region.
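The quoted underestimation corresponds to a mean fractional bias of modelled AOD against a satellite reference. A minimal sketch with invented placeholder values (not the study's data):

```python
# Mean fractional AOD bias of a model against a satellite reference
# (e.g. MISR), expressed as a percentage. The arrays are hypothetical.
import numpy as np

aod_model = np.array([0.21, 0.35, 0.28, 0.40])   # modelled AOD, invented
aod_misr  = np.array([0.30, 0.55, 0.45, 0.60])   # retrieved AOD, invented

bias_pct = 100.0 * (aod_model.mean() - aod_misr.mean()) / aod_misr.mean()
print(f"Mean AOD bias vs. MISR: {bias_pct:.0f}%")  # negative => underestimation
```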
Abstract:
This text extends some ideas presented in a keynote lecture at the 5th Encontro de Tipografia conference in Barcelos, Portugal, in November 2014. The paper discusses problems of identifying the location and encoding of design decisions, the implications of digital workflows for capturing knowledge generated through design practice, and the consequences of the transformation of production tools into commodities. It concludes with a discussion of the perception of added value in typeface design.
Abstract:
Background Epidemiological studies indicate that the prevalence of psychological problems in patients attending primary care services may be as high as 25%. Aim To identify factors that influence the detection of psychological difficulties in adolescent patients receiving primary care in the UK. Design of study A prospective study of 13–16 year olds consecutively attending general practices. Setting General practices, Norfolk, UK. Method Information was obtained from adolescents and parents using the validated Strengths and Difficulties Questionnaire (SDQ) and from GPs using the consultation assessment form. Results Ninety-eight adolescents were recruited by 13 GPs in Norfolk (mean age = 14.4 years, SD = 1.08; 38 males, 60 females). The study identified psychological difficulties in almost one-third of adolescents (31/98, 31.6%). Three factors significant to the detection of psychological disorders in adolescents were identified: adolescents' perceptions of difficulties according to the self-report SDQ, the severity of their problems as indicated by the self-report SDQ, and whether psychological issues were discussed in the consultation. GPs did not always explore psychological problems with adolescents, even when GPs perceived these to be present. Nineteen of the 31 adolescents with psychological difficulties were identified by GPs (sensitivity = 61.2%, specificity = 85.1%). A management plan or follow-up was made for only seven of the 19 adolescents identified, suggesting that ongoing psychological difficulties in many patients are not being addressed. Conclusions GPs are in a good position to identify psychological issues in adolescents, but GPs and adolescents seem reluctant to explore these openly. Open discussion of psychological issues in GP consultations was found to be the most important factor in determining whether psychological difficulties in adolescents are detected by GPs.
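For reference, the reported detection figures follow the usual screening definitions; with 19 of the 31 adolescents with difficulties identified by GPs, the sensitivity is 19/31, or roughly 61%:

```latex
\text{sensitivity} = \frac{TP}{TP + FN} = \frac{19}{31} \approx 61\%, \qquad
\text{specificity} = \frac{TN}{TN + FP}
```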
Abstract:
In vitro fermentation techniques (IVFT) have been widely used to evaluate the nutritive value of feeds for ruminants and, in the last decade, to assess the effect of different nutritional strategies on methane (CH4) production. However, many technical factors may influence the results obtained. The present review has been prepared by the 'Global Network' FACCE-JPI international research consortium to provide a critical evaluation of the main factors that need to be considered when designing, conducting and interpreting IVFT experiments that investigate nutritional strategies to mitigate CH4 emission from ruminants. Given the increasing and wide-scale use of IVFT, there is a need to critically review reports in the literature and establish what criteria are essential to the establishment and implementation of in vitro techniques. Key aspects considered include: i) donor animal species and number of animals used, ii) diet fed to donor animals, iii) collection and processing of rumen fluid as inoculum, iv) choice of substrate and incubation buffer, v) incubation procedures and CH4 measurements, vi) headspace gas composition and vii) comparability of in vitro and in vivo measurements. Based on an evaluation of experimental evidence, a set of technical recommendations are presented to harmonize IVFT for feed evaluation, assessment of rumen function and CH4 production.
Abstract:
Background Major Depressive Disorder (MDD) is among the most prevalent and disabling medical conditions worldwide. Identification of clinical and biological markers (“biomarkers”) of treatment response could personalize clinical decisions and lead to better outcomes. This paper describes the aims, design, and methods of a discovery study of biomarkers in antidepressant treatment response, conducted by the Canadian Biomarker Integration Network in Depression (CAN-BIND). The CAN-BIND research program investigates and identifies biomarkers that help to predict outcomes in patients with MDD treated with antidepressant medication. The primary objective of this initial study (known as CAN-BIND-1) is to identify individual and integrated neuroimaging, electrophysiological, molecular, and clinical predictors of response to sequential antidepressant monotherapy and adjunctive therapy in MDD. Methods CAN-BIND-1 is a multisite initiative involving 6 academic health centres working collaboratively with other universities and research centres. In the 16-week protocol, patients with MDD are treated with a first-line antidepressant (escitalopram 10–20 mg/d) that, if clinically warranted after eight weeks, is augmented with an evidence-based, add-on medication (aripiprazole 2–10 mg/d). Comprehensive datasets are obtained using clinical rating scales; behavioural, dimensional, and functioning/quality of life measures; neurocognitive testing; genomic, genetic, and proteomic profiling from blood samples; combined structural and functional magnetic resonance imaging; and electroencephalography. De-identified data from all sites are aggregated within a secure neuroinformatics platform for data integration, management, storage, and analyses. Statistical analyses will include multivariate and machine-learning techniques to identify predictors, moderators, and mediators of treatment response. Discussion From June 2013 to February 2015, a cohort of 134 participants (85 outpatients with MDD and 49 healthy participants) has been evaluated at baseline. The clinical characteristics of this cohort are similar to other studies of MDD. Recruitment at all sites is ongoing to a target sample of 290 participants. CAN-BIND will identify biomarkers of treatment response in MDD through extensive clinical, molecular, and imaging assessments, in order to improve treatment practice and clinical outcomes. It will also create an innovative, robust platform and database for future research.