22 results for Weighted by Sum Assured
in Aston University Research Archive
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6 but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of the logarithmic critical areas for the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local spatially windowed matched filter. The model was extended to include complex stimuli and was found to describe the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2.
The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their detection strategy from cross-correlation (i.e., a matched filter) to auto-correlation when uncertainty is introduced to the task. The model described the data accurately.
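The weighted-log relation reported above can be sketched numerically. This is a minimal reading of the abstract's statement, assuming the logarithmic critical areas of the components combine weighted by their relative Michelson contrasts; the exact functional form used in the thesis may differ.

```python
import math

def critical_area_sum(areas, contrasts):
    """Critical area of a grating sum, assuming the logarithmic
    critical areas of the component gratings combine weighted by the
    components' relative Michelson contrasts (a hypothetical reading
    of the relation reported in the abstract)."""
    total = sum(contrasts)
    log_area = sum((c / total) * math.log(a)
                   for a, c in zip(areas, contrasts))
    return math.exp(log_area)

# With equal contrasts the relation reduces to the geometric mean
# of the component critical areas:
print(critical_area_sum([4.0, 16.0], [0.5, 0.5]))  # -> 8.0
```

Under this reading, a high-contrast component pulls the critical area of the sum towards its own critical area.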
Abstract:
Data envelopment analysis defines the relative efficiency of a decision making unit (DMU) as the ratio of the sum of its weighted outputs to the sum of its weighted inputs, allowing the DMUs to freely allocate weights to their inputs/outputs. However, this measure may not reflect a DMU's true efficiency as some inputs/outputs may not contribute reasonably to the efficiency measure. Traditionally, weights restrictions have been imposed to overcome this problem. This paper offers a new approach to this problem where DMUs operate a constant returns to scale technology in a single input multi-output context. The approach is based on introducing unobserved DMUs, created by adjusting the output levels of certain observed relatively efficient DMUs, reflecting a combination of technical information of feasible production levels and the decision maker's (DM's) value judgments. Its main advantage is that the information conveyed by the DM is local, with reference to a specific observed DMU. The approach is illustrated on a real life application. © 2003 Elsevier B.V. All rights reserved.
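The ratio definition at the core of DEA can be illustrated with a toy computation. This sketch uses hypothetical DMUs and a single common weight vector to show the weighted-outputs/weighted-inputs ratio; full DEA lets every DMU choose its own most favourable weights, which requires solving a linear programme per DMU and is not shown here.

```python
def efficiency(inputs, outputs, in_w, out_w):
    """Ratio of the weighted sum of outputs to the weighted sum of
    inputs for one DMU, as in the DEA efficiency definition."""
    return sum(o * w for o, w in zip(outputs, out_w)) / \
           sum(i * w for i, w in zip(inputs, in_w))

# Single input, two outputs; DMU data and common weights are illustrative.
dmus = {"A": ([10.0], [20.0, 5.0]),
        "B": ([10.0], [10.0, 10.0])}
in_w, out_w = [1.0], [0.5, 0.5]

raw = {name: efficiency(i, o, in_w, out_w) for name, (i, o) in dmus.items()}
best = max(raw.values())
relative = {name: e / best for name, e in raw.items()}  # 1.0 = on the frontier
print(relative)
```

With these fixed weights DMU A defines the frontier; under DEA's free weight allocation, B could instead load all weight onto its second output and appear more efficient — exactly the behaviour the weights-restriction literature tries to temper.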
Abstract:
This paper presents a simple profitability-based decision model to show how synergistic gains generated by the joint adoption of complementary innovations may influence the firm's adoption decision. For this purpose a weighted index of intra-firm diffusion is built to investigate empirically the drivers of the intensity of joint use of a set of complementary innovations. The findings indicate that establishment size, ownership structure and product market concentration are important determinants of the intensity of use. Interestingly, the factors that affect the extent of use of technological innovations also affect that of clusters of management practices. However, they can explain only part of the heterogeneity of the benefits from joint use.
Abstract:
We assessed summation of contrast across eyes and area at detection threshold (Ct). Stimuli were sine-wave gratings (2.5 c/deg) spatially modulated by cosine- and anticosine-phase raised plaids (0.5 c/deg components oriented at ±45°). When presented dichoptically the signal regions were interdigitated across eyes but produced a smooth continuous grating following their linear binocular sum. The average summation ratio (Ct1/Ct1+2) for this stimulus pair was 1.64 (4.3 dB). This was only slightly less than the binocular summation found for the same patch type presented to both eyes, and the area summation found for the two different patch types presented to the same eye. We considered 192 model architectures containing each of the following four elements in all possible orders: (i) linear summation or a MAX operator across eyes, (ii) linear summation or a MAX operator across area, (iii) linear or accelerating contrast transduction, and (iv) additive Gaussian, stochastic noise. Formal equivalences reduced this to 62 different models. The most successful four-element model was: linear summation across eyes followed by nonlinear contrast transduction, linear summation across area, and late noise. Model performance was enhanced when additional nonlinearities were placed before binocular summation and after area summation. The implications for models of probability summation and uncertainty are discussed.
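The decibel figure quoted for the summation ratio follows the standard amplitude convention, which can be checked in one line:

```python
import math

def ratio_to_db(ratio):
    """Express a threshold (contrast-sensitivity) ratio in decibels,
    using the 20*log10 convention for amplitude-like quantities."""
    return 20 * math.log10(ratio)

print(round(ratio_to_db(1.64), 1))  # -> 4.3
```

For reference, ideal linear summation of two equal inputs (a ratio of 2) would correspond to about 6 dB on this scale.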
Abstract:
Previous contrast discrimination experiments have shown that luminance contrast is summed across ocular (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) and spatial (T. S. Meese & R. J. Summers, 2007) dimensions at threshold and above. However, is this process sufficiently general to operate across the conjunction of eyes and space? Here we used a "Swiss cheese" stimulus where the blurred "holes" in sine-wave carriers were of equal area to the blurred target ("cheese") regions. The locations of the target regions in the monocular image pairs were interdigitated across eyes such that their binocular sum was a uniform grating. When pedestal contrasts were above threshold, the monocular neural images contained strong evidence that the high-contrast regions in the two eyes did not overlap. Nevertheless, sensitivity to dual contrast increments (i.e., to contrast increments in different locations in the two eyes) was a factor of ∼1.7 greater than to single increments (i.e., increments in a single eye), comparable with conventional binocular summation. This provides evidence for a contiguous area summation process that operates at all contrasts and is influenced little, if at all, by eye of origin. A three-stage model of contrast gain control fitted the results and possessed the properties of ocularity invariance and area invariance owing to its cascade of normalization stages. The implications for a population code for pattern size are discussed.
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software with application to the ordered weighted averaging (OWA) operators of decision-making units (DMUs). OWA, originally introduced by Yager (IEEE Trans Syst Man Cybern 18(1):183-190, 1988), has gained much interest among researchers; hence many applications in areas such as decision making, expert systems, data mining, approximate reasoning, fuzzy systems and control have been proposed. On the other hand, SAS is powerful software capable of running various optimization tools such as linear and non-linear programming with all types of constraints. To facilitate the use of the OWA operator by SAS users, a code was implemented. The SAS macro developed in this paper selects the criteria and alternatives from a SAS dataset and calculates a set of OWA weights. An example is given to illustrate the features of SAS/OWA software. © Springer-Verlag 2009.
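The aggregation the macro computes is Yager's OWA operator: sort the argument values in descending order, then take the dot product with the weight vector. A minimal sketch (in Python rather than SAS, with illustrative numbers):

```python
def owa(values, weights):
    """Yager's ordered weighted averaging: sort the arguments in
    descending order, then take the dot product with the weights.
    Weights must be non-negative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Weights [1,0,0] recover the max, [0,0,1] the min,
# and uniform weights the arithmetic mean.
print(owa([3.0, 9.0, 6.0], [0.5, 0.3, 0.2]))  # -> 6.9
```

Because the weights attach to sorted positions rather than to particular criteria, the same weight vector smoothly interpolates between "and-like" (min) and "or-like" (max) aggregation.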
Abstract:
SD Apo Lactoferrin-Tobramycin/Gentamicin Combinations are superior to monotherapy in the eradication of Pseudomonas aeruginosa Biofilm in the lungs Wilson Oguejiofor1, Lindsay J. Marshall1, Andrew J. Ingham1, Robert Price2, Jag. Shur2 1School of Life and Health Sciences, Aston University, Birmingham, UK. 2School of Pharmacy and Pharmacology, University of Bath, Bath, UK. KEYWORDS: lactoferrin, apo lactoferrin, spray drying, biofilm, cystic fibrosis INTRODUCTION Chronic lung infection with the opportunistic pathogen Pseudomonas aeruginosa has been recognised as a major contributor to the high incidence of morbidity and mortality amongst cystic fibrosis (CF) patients (1,2). Current strategies for managing lung infections in CF patients involve the aggressive use of aerosolised antibiotics (3); however, increasing evidence suggests that the biofilm component of P. aeruginosa in the lower airway remains unperturbed and is associated with the development of antibiotic resistance. If so, there is an urgent need to adjust the current treatment strategy to include compounds that prevent biofilm formation or disrupt established biofilms. It is well understood that biofilm formation is strongly dependent on iron (Fe3+) availability (4); therefore aerosolised anti-infective formulations that can chelate iron may be well suited to eliminating P. aeruginosa biofilms on CF airway epithelial cells (5). In this study, we report the use of combination therapy, an aminoglycoside (tobramycin or gentamicin) plus an antimicrobial peptide (lactoferrin), to significantly deplete P. aeruginosa biofilms. We demonstrate that lactoferrin-tobramycin and lactoferrin-gentamicin combinations are superior to the single-antibiotic regime currently employed against P. aeruginosa biofilms. MATERIALS AND METHODS Antibiotics: The antibiotics used in this study were gentamicin and tobramycin, supplied by Fagron, UK.
Bacterial strain and growth conditions: Pseudomonas aeruginosa strain PAO1 was provided by Prof. Peter Lambert of Aston University, Birmingham, UK. The strain was routinely grown from storage in a medium supplemented with magnesium chloride, glucose and casamino acids. Dialysis of lactoferrin: Apo lactoferrin was prepared by dialyzing a suspension of lactoferrin for 24 h at 4 °C against 20 mmol/L sodium dihydrogen phosphate, 20 mmol/L sodium acetate and 40 mmol/L EDTA (pH 3.5). Ferric ion (Fe3+) removal was verified by atomic absorption spectroscopy. Spray drying of combinations of lactoferrin and apo lactoferrin with the different aminoglycosides: Combinations of tobramycin and gentamicin with the different preparations of lactoferrin were spray dried (SD) as a 2% (w/v) aqueous suspension. The spray drying parameters used to produce suitable micron-sized particles were: inlet temperature, 180 °C; spray flow rate, 606 L/hr; pump setting, 10%; aspirator setting, 85% (34 m3/hr), giving outlet temperatures ranging from 99 to 106 °C. Viability assay: To test the bactericidal activity of the various combinations, a viability assay was performed as previously described by Xu, Xiong et al. (6) with some modifications. Briefly, 10 µL of a ~6.6 x 10^7 CFU/mL P. aeruginosa strain PAO1 suspension was incubated (37 °C, 60 mins) with 90 µL of a 2 µg/mL concentration of the various combinations and sampled every 10 mins. After incubation, the cells were diluted in deionised water and plated on Mueller-Hinton agar plates. Following 24 h incubation of the plates at 37 °C, the percentage of viable cells was determined relative to incubation without added antibiotics. Biofilm assay: To test the susceptibility of the P. aeruginosa strain to various antibiotics in the biofilm mode of growth, overnight cultures of P. aeruginosa were diluted 1:100 into fresh medium supplemented with magnesium chloride, glucose and casamino acids.
Aliquots of the dilution were dispensed into a 96-well dish and incubated (37 °C, 24 h). Excess broth was removed and the number of colony forming units per millilitre (CFU/mL) of the planktonic bacteria was quantified. The biofilms were then washed and stained with 0.1% (w/v) crystal violet for 15 mins at room temperature. Following vigorous washing with water, the stained biofilms were solubilized in 30% acetic acid and the absorbance at 550 nm of a 125 µL aliquot was determined in a microplate reader (Multiskan Spectrum, Thermo Scientific) using 30% acetic acid in water as the blank. Aliquots of the broth prior to staining were used as an indicator of the level of planktonic growth. RESULTS AND DISCUSSION Following spray drying, the mean yield, volume weighted mean diameter and moisture content of the lactoferrin powders were measured (Tables 1 and 2).

Table 1: Spray drying parameters
Formulation | Inlet temp (°C) | Outlet temp (°C) | Airflow rate (L/hr) | Mean yield (%) | Moisture content (%)
SD Lactoferrin | 180 | 99-100 | 606 | 45.2 ±2.7 | 5.9 ±0.4
SD Apo Lactoferrin | 180 | 100-102 | 606 | 57.8 ±1.8 | 5.7 ±0.2
Tobramycin | 180 | 102-104 | 606 | 82.1 ±2.2 | 3.2 ±0.4
Lactoferrin + Tobramycin | 180 | 104-106 | 606 | 87.5 ±1.4 | 3.7 ±0.2
Apo Lactoferrin + Tobramycin | 180 | 103-104 | 606 | 76.3 ±2.4 | 3.3 ±0.5
Gentamicin | 180 | 99-102 | 606 | 85.4 ±1.3 | 4.0 ±0.2
Lactoferrin + Gentamicin | 180 | 102-104 | 606 | 87.3 ±2.1 | 3.9 ±0.3
Apo Lactoferrin + Gentamicin | 180 | 99-103 | 606 | 80.1 ±1.9 | 3.4 ±0.4

Table 2: Particle size distribution
Formulation | d10 | d50 | d90
SD Lactoferrin | 1.38 | 4.91 | 11.08
SD Apo Lactoferrin | 1.28 | 4.79 | 11.04
SD Tobramycin | 1.25 | 4.90 | 11.29
SD Lactoferrin + Tobramycin | 1.17 | 5.27 | 15.23
SD Apo Lactoferrin + Tobramycin | 1.11 | 5.06 | 14.31
SD Gentamicin | 1.40 | 6.06 | 14.38
SD Lactoferrin + Gentamicin | 1.47 | 6.23 | 14.41
SD Apo Lactoferrin + Gentamicin | 1.46 | 5.15 | 11.53

The bactericidal activity of the various combinations was tested against P. aeruginosa PAO1 following a 60 minute incubation period (Figures 1 and 2).
While 2 µg/mL of a 1:1 combination of spray dried apo lactoferrin and gentamicin completely killed all bacterial cells within 40 mins, the same concentration was not as effective for the other antibiotic combinations. However, the other combinations still reduced bacterial cell counts by over 3 log units within 60 mins. Figure 1: Logarithmic plot of bacterial cell viability of various combinations of tobramycin and lactoferrin preparations at 2 µg/mL (n = 3). Figure 2: Logarithmic plot of bacterial cell viability of various combinations of gentamicin and lactoferrin preparations at 2 µg/mL (n = 3). Crystal violet staining showed that biofilm formation by P. aeruginosa PAO1 was significantly (ANOVA, p < 0.05) inhibited in the presence of the different lactoferrin preparations. Interestingly, apo lactoferrin and spray dried lactoferrin exhibited greater inhibition of both biofilm formation and biofilm persistence (Figure 3). Figure 3: Crystal violet staining of residual biofilms of P. aeruginosa following a 24 h incubation with the various combinations of antibiotics and exposure of biofilms formed for 48 h. CONCLUSION In conclusion, combination therapy comprising an antimicrobial peptide (lactoferrin) and an aminoglycoside (tobramycin or gentamicin) provides a feasible alternative to monotherapy, since the various combinations are more effective than the respective monotherapies at eradicating both planktonic cells and biofilms of P. aeruginosa. ACKNOWLEDGEMENT The authors would like to thank Mr. John Swarbrick and Friesland Campina for their generous donation of the lactoferrin. REFERENCES 1. Hassett, D.J., Sutton, M.D., Schurr, M.J., Herr, A.B., Caldwell, C.C. and Matu, J.O. (2009), "Pseudomonas aeruginosa hypoxic or anaerobic biofilm infections within cystic fibrosis airways". Trends in Microbiology, 17, 130-138. 2. Cystic Fibrosis Trust (2009), "Antibiotic treatment for cystic fibrosis".
Report of the UK Cystic Fibrosis Trust Antibiotic Working Group. Consensus document. London: Cystic Fibrosis Trust. 3. Garcia-Contreras, L. and Hickey, A.J. (2002), "Pharmaceutical and biotechnological aerosols for cystic fibrosis therapy". Advanced Drug Delivery Reviews, 54, 1491-1504. 4. O'May, C.Y., Sanderson, K., Roddam, L.F., Kirov, S.M. and Reid, D.W. (2009), "Iron-binding compounds impair Pseudomonas aeruginosa biofilm formation, especially under anaerobic conditions". J Med Microbiol, 58, 765-773. 5. Reid, D.W., Carroll, V., O'May, C., Champion, A. and Kirov, S.M. (2007), "Increased airway iron as a potential factor in the persistence of Pseudomonas aeruginosa infection in cystic fibrosis". European Respiratory Journal, 30, 286-292. 6. Xu, G., Xiong, W., Hu, Q., Zuo, P., Shao, B., Lan, F., Lu, X., Xu, Y. and Xiong, S. (2010), "Lactoferrin-derived peptides and Lactoferricin chimera inhibit virulence factor production and biofilm formation in Pseudomonas aeruginosa". J Appl Microbiol, 109, 1311-1318.
Abstract:
Purpose: The Nidek F-10 is a scanning laser ophthalmoscope that is capable of a novel fundus imaging technique, so-called ‘retro-mode’ imaging. The standard method of imaging drusen in age-related macular degeneration (AMD) is by fundus photography. The aim of the study was to assess drusen quantification using retro-mode imaging. Methods: Stereoscopic fundus photographs and retro-mode images were captured in 31 eyes of 20 patients with varying stages of AMD. Two experienced masked retinal graders independently assessed images for the number and size of drusen, using purpose-designed software. Drusen were further assessed in a subset of eight patients using optical coherence tomography (OCT) imaging. Results: Drusen observed by fundus photography (mean 33.5) were significantly fewer in number than subretinal deposits seen in retro-mode (mean 81.6; p < 0.001). The predominant deposit diameter was on average 5 µm smaller in retro-mode imaging than in fundus photography (p = 0.004). Agreement between graders for both types of imaging was substantial for number of deposits (weighted κ = 0.69) and moderate for size of deposits (weighted κ = 0.42). Retro-mode deposits corresponded to drusen on OCT imaging in all eight patients. Conclusion: The subretinal deposits detected by retro-mode imaging were consistent with the appearance of drusen on OCT imaging; however, a larger longitudinal study would be required to confirm this finding. Retro-mode imaging detected significantly more deposits than conventional colour fundus photography. Retro-mode imaging provides a rapid non-invasive technique for monitoring subtle changes and progression of AMD, and may be useful in monitoring the response of drusen to future therapeutic interventions.
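The weighted kappa statistic used to quantify inter-grader agreement penalises disagreements by how far apart the two ratings are. A self-contained sketch with linear weights is below; the rating data are made up for illustration, and the abstract does not state whether linear or quadratic weights were used.

```python
from collections import Counter

def weighted_kappa(r1, r2, k):
    """Linearly weighted Cohen's kappa for two raters assigning
    integer categories 0..k-1. kappa = 1 - (observed weighted
    disagreement) / (chance-expected weighted disagreement)."""
    n = len(r1)
    obs = Counter(zip(r1, r2))            # joint rating counts
    m1, m2 = Counter(r1), Counter(r2)     # marginal counts per rater
    num = sum(abs(i - j) * obs[(i, j)] / n
              for i in range(k) for j in range(k))
    den = sum(abs(i - j) * m1[i] * m2[j] / n**2
              for i in range(k) for j in range(k))
    return 1 - num / den

# Perfect agreement gives kappa = 1.
print(weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))  # -> 1.0
```

On the conventional Landis-Koch scale, the reported 0.69 falls in the "substantial" band and 0.42 in the "moderate" band, matching the abstract's wording.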
Abstract:
Modern business trends such as agile manufacturing and virtual corporations require high levels of flexibility and responsiveness to consumer demand, and require the ability to quickly and efficiently select trading partners. Automated computational techniques for supply chain formation have the potential to provide significant advantages in terms of speed and efficiency over the traditional manual approach to partner selection. Automated supply chain formation is the process of determining the participants within a supply chain and the terms of the exchanges made between these participants. In this thesis we present an automated technique for supply chain formation based upon the min-sum loopy belief propagation algorithm (LBP). LBP is a decentralised and distributed message-passing algorithm which allows participants to share their beliefs about the optimal structure of the supply chain based upon their costs, capabilities and requirements. We propose a novel framework for the application of LBP to the existing state-of-the-art case of the decentralised supply chain formation problem, and extend this framework to allow for application to further novel and established problem cases. Specifically, the contributions made by this thesis are:
• A novel framework to allow for the application of LBP to the decentralised supply chain formation scenario investigated using the current state-of-the-art approach. Our experimental analysis indicates that LBP is able to match or outperform this approach for the vast majority of problem instances tested.
• A new solution goal for supply chain formation in which economically motivated producers aim to maximise their profits by intelligently altering their profit margins. We propose a rational pricing strategy that allows producers to earn significantly greater profits than a comparable LBP-based profit-making approach.
• An LBP-based framework which allows the algorithm to be used to solve supply chain formation problems in which goods are exchanged in multiple units, a first for a fully decentralised technique. As well as multiple-unit exchanges, we also model in this scenario realistic constraints such as factory capacities and input-to-output ratios. LBP continues to match or outperform an extended version of the existing state-of-the-art approach in this scenario.
• Introduction of a dynamic supply chain formation scenario in which participants are able to alter their properties or to enter or leave the process at any time. Our results suggest that LBP deals easily with individual occurrences of these alterations and that performance degrades gracefully when they occur in larger numbers.
Abstract:
Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to estimate activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness differences as a function of signal-to-noise across a volumetric image, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis with MEG beamformer images that utilizes the peak locations within each participant's volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data. We found that our method was immune to artefactual group effects that can arise as a result of inhomogeneous smoothness differences across a volumetric image. We also used our peak-clustering algorithm on experimental data and found that regions were identified that corresponded with task-related regions identified in the literature. These findings suggest that our technique is a robust method for group-level analysis with MEG beamformer images.
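The first step of such an approach, extracting peak locations from each participant's volumetric image, can be sketched as a local-maximum search over a 3D grid. This is a minimal stand-in under stated assumptions (strict local maxima over a 26-neighbourhood, above a threshold); the paper's actual peak-extraction and clustering details are not reproduced here.

```python
def local_peaks(vol, threshold=0.0):
    """Return (x, y, z) locations whose value exceeds `threshold`
    and strictly exceeds all 26 face/edge/corner neighbours -- a
    simple sketch of per-participant peak extraction from a
    volumetric image (details are illustrative assumptions)."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    peaks = []
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                v = vol[x][y][z]
                if v <= threshold:
                    continue
                neigh = [vol[i][j][k]
                         for i in range(max(0, x - 1), min(nx, x + 2))
                         for j in range(max(0, y - 1), min(ny, y + 2))
                         for k in range(max(0, z - 1), min(nz, z + 2))
                         if (i, j, k) != (x, y, z)]
                if all(v > u for u in neigh):
                    peaks.append((x, y, z))
    return peaks
```

Working with peak locations rather than full statistic maps is what makes the method insensitive to the inhomogeneous smoothness that troubles voxel-wise group statistics.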
Abstract:
Context/Motivation - Different modeling techniques have been used to model requirements and decision-making of self-adaptive systems (SASs). Specifically, goal models have been prolific in supporting decision-making depending on partial and total fulfilment of functional (goals) and non-functional requirements (softgoals). Different goal-realization strategies can have different effects on softgoals, which are specified with weighted contribution links. The final decision about what strategy to use is based, among other reasons, on a utility function that takes into account the weighted sum of the different effects on softgoals. Questions/Problems - One of the main challenges about decision-making in self-adaptive systems is dealing with uncertainty during runtime. New techniques are needed to systematically revise the current model when empirical evidence becomes available from the deployment. Principal ideas/results - In this paper we enrich the decision-making supported by goal models by using Dynamic Decision Networks (DDNs). Goal-realization strategies and their impact on softgoals correspond to decision alternatives and to conditional probabilities and expected utilities in the DDNs, respectively. Our novel approach allows the specification of preferences over the softgoals and supports reasoning about partial satisfaction of softgoals using probabilities. We report results of the application of the approach on two different cases. Our early results suggest the decision-making process of SASs can be improved by using DDNs. © 2013 Springer-Verlag.
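The weighted-sum utility underlying the strategy choice can be sketched as follows. Softgoal names, weights and satisfaction probabilities are illustrative, not taken from the paper's case studies; the DDN machinery (temporal revision of the probabilities from runtime evidence) is not modelled here.

```python
def expected_utility(p_sat, weights):
    """Expected utility of a goal-realization strategy as the
    weighted sum of softgoal-satisfaction probabilities."""
    return sum(weights[s] * p for s, p in p_sat.items())

# Hypothetical softgoal preferences and per-strategy satisfaction
# probabilities (the quantities a DDN would update at runtime).
weights = {"performance": 0.6, "battery_life": 0.4}
strategies = {
    "high_accuracy": {"performance": 0.9, "battery_life": 0.3},
    "low_power":     {"performance": 0.5, "battery_life": 0.8},
}

best = max(strategies, key=lambda s: expected_utility(strategies[s], weights))
print(best)  # -> high_accuracy
```

In the DDN setting, new runtime evidence would shift the satisfaction probabilities, and the arg-max above could change accordingly, which is exactly the model-revision behaviour the paper argues for.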
Abstract:
Supply chain formation is the process by which a set of producers within a network determine the subset of these producers able to form a chain to supply goods to one or more consumers at the lowest cost. This problem has been tackled in a number of ways, including auctions, negotiations, and argumentation-based approaches. In this paper we show how this problem can be cast as an optimization of a pairwise cost function. Optimizing this class of energy functions is NP-hard but efficient approximations to the global minimum can be obtained using loopy belief propagation (LBP). Here we detail a max-sum LBP-based approach to the supply chain formation problem, involving decentralized message-passing between supply chain participants. Our approach is evaluated against a well-known decentralized double-auction method and an optimal centralized technique, showing several improvements on the auction method: it obtains better solutions for most network instances which allow for competitive equilibrium (Competitive equilibrium in Walsh and Wellman is a set of producer costs which permits a Pareto optimal state in which agents in the allocation receive non-negative surplus and agents not in the allocation would acquire non-positive surplus by participating in the supply chain) while also optimally solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions. © 2012 Wiley Periodicals, Inc.
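The pairwise cost formulation can be made concrete on a toy problem. The sketch below defines a small pairwise energy and finds its global minimum by exhaustive search; unary/pairwise costs are invented for illustration. Max-sum LBP would instead pass messages between neighbouring participants to approximate this minimum without enumerating all assignments, which is what makes the decentralised approach scale.

```python
from itertools import product

# Toy pairwise energy: each variable (e.g. a participant's buy/sell
# decision) takes a binary state; total cost is the sum of unary
# costs plus pairwise costs on the graph's edges (numbers invented).
unary = {"a": [1.0, 2.0], "b": [0.0, 1.0], "c": [2.0, 0.0]}
pairwise = {("a", "b"): [[0.0, 3.0], [3.0, 0.0]],
            ("b", "c"): [[0.0, 1.0], [1.0, 0.0]]}

def energy(assign):
    """Total pairwise-decomposed cost of a full assignment."""
    e = sum(unary[v][s] for v, s in assign.items())
    e += sum(cost[assign[u]][assign[v]] for (u, v), cost in pairwise.items())
    return e

names = list(unary)
best = min((dict(zip(names, states))
            for states in product(range(2), repeat=len(names))),
           key=energy)
print(best, energy(best))
```

Exhaustive search is exponential in the number of variables; optimizing this class of energy functions exactly is NP-hard in general, which is why the paper resorts to LBP's efficient approximation.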
Abstract:
Determining the Ordered Weighted Averaging (OWA) operator weights is important in decision making applications. Several approaches have been proposed in the literature to obtain the associated weights. This paper provides an alternative disparity model to identify the OWA operator weights. The proposed mathematical model extends the existing disparity approaches by minimizing the sum of deviations between distinct OWA weights. The proposed disparity model can be used for a preference ranking aggregation. A numerical example in preference ranking and an application in search engines prove the usefulness of the generated OWA weights.
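The two quantities that disparity models trade off can be computed directly: the orness of a weight vector (its position between the min and max operators) and its disparity. The sketch below uses the sum of absolute pairwise deviations as the disparity measure; whether the paper's model uses absolute or squared deviations, and over all pairs or adjacent weights only, is an assumption here.

```python
def orness(w):
    """Yager's orness of an OWA weight vector:
    1 recovers the max operator, 0 the min, 0.5 the mean."""
    n = len(w)
    return sum((n - i - 1) * wi for i, wi in enumerate(w)) / (n - 1)

def disparity(w):
    """Sum of absolute deviations between all distinct pairs of
    weights -- one form of the quantity disparity models minimize."""
    return sum(abs(w[i] - w[j])
               for i in range(len(w)) for j in range(i + 1, len(w)))

w = [0.5, 0.3, 0.2]
print(orness(w), disparity(w))  # -> 0.65 0.6
```

A disparity model then searches for the weight vector with minimal disparity among those achieving a prescribed orness level, keeping the aggregation as "even-handed" as the desired attitude allows.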
Abstract:
This thesis examines the ways Indonesian politicians exploit the rhetorical power of metaphors in Indonesian political discourse. The research applies Conceptual Metaphor Theory, Metaphorical Frame Analysis and Critical Discourse Analysis to textual and oral data. The corpus comprises: 150 political news articles from two newspapers (Harian Kompas and Harian Waspada, 2010-2011 editions), 30 recordings of two television news and talk-show programmes (TV-One and Metro-TV), and 20 interviews with four legislators, two educated persons and two laymen. For this study, a corpus of written bahasa Indonesia was also compiled, comprising 150 texts of approximately 439,472 tokens. The data analysis shows the potential power of metaphors in relation to how politicians communicate the results of their thinking, reasoning and meaning-making through language and discourse, and its social consequences. The analysis firstly revealed 1155 metaphors. These metaphors were then classified into the categories of conventional metaphor, cognitive function of metaphor, metaphorical mapping and metaphor variation. The degree of conventionality of metaphors was established based on the sum of expressions in each group of metaphors. Secondly, the analysis revealed that metaphor variation is influenced by the broader Indonesian cultural context and the natural and physical environment, such as the social, regional, stylistic and individual dimensions. The mapping system of metaphor is unidirectional. Thirdly, the data show that metaphoric thought pervades political discourse in relation to its uses as: (1) a felicitous tool for the rhetoric of political leaders, (2) part of meaning-making that keeps the discourse contexts alive and active, and (3) the degree to which metaphor and discourse shape the conceptual structures of politicians' rhetoric.
Fourthly, the analysis of data revealed that Indonesian political discourse attempts to create both distance and solidarity towards general and specific social categories, accomplished via metaphorical and frame references to the conceptualisations of us/them. The result of the analysis shows that metaphor and frame are excellent indicators of the us/them categories which work dialectically in the discourse. The acts of categorisation via metaphors and frames at both textual and conceptual level activate asymmetrical concepts and contribute to social and political hierarchical constructs, e.g. WEAKNESS vs. POWER, STUDENT vs. TEACHER, GHOST vs. CHOSEN WARRIOR, and so on. This analysis underscores the dynamic nature of categories by documenting metaphorical transfers between, e.g., ENEMY, DISEASE, BUSINESS, MYSTERIOUS OBJECT and CORRUPTION, LAW, POLITICS and CASE. The metaphorical transfers showed that politicians try to dictate how they categorise each other in order to mobilise audiences to act on behalf of their ideologies and to create distance and solidarity.
Abstract:
Supply chain formation (SCF) is the process of determining the set of participants and exchange relationships within a network with the goal of setting up a supply chain that meets some predefined social objective. Many proposed solutions for the SCF problem rely on centralized computation, which presents a single point of failure and can also lead to problems with scalability. Decentralized techniques that aid supply chain emergence offer a more robust and scalable approach by allowing participants to deliberate between themselves about the structure of the optimal supply chain. Current decentralized supply chain emergence mechanisms are only able to deal with simplistic scenarios in which goods are produced and traded in single units only and without taking into account production capacities or input-output ratios other than 1:1. In this paper, we demonstrate the performance of a graphical inference technique, max-sum loopy belief propagation (LBP), in a complex multiunit supply chain emergence scenario which models additional constraints such as production capacities and input-to-output ratios. We also provide results demonstrating the performance of LBP in dynamic environments, where the properties and composition of participants are altered as the algorithm is running. Our results suggest that max-sum LBP produces consistently strong solutions on a variety of network structures in a multiunit problem scenario, and that performance tends not to be affected by on-the-fly changes to the properties or composition of participants.