4 Method Improvements

The existing population synthesis procedures have many limitations. In this chapter, the following concerns are discussed:

The chapter is structured largely as a discussion of these issues, together with an attempt to brainstorm solutions to these limitations, focusing particularly on the IPF-based methods; not all ideas are entirely successful. The new methods are intended to be largely independent, but most can be combined if desired. In the following chapters, several of these new methods are implemented and evaluated.

1 Simplifying the PUMS

Table 4.1: Illustration of relationship between sparsity and number of dimensions/cross-classification attributes. Reading from the top, the table shows increasing sparsity as a sample is cross-classified using a larger number of attributes. Alternatively, the table can be read from the bottom: starting with an 8-way contingency table, one dimension is collapsed at a time, giving a progressively denser table. By the time the 8-way table has been collapsed to a 3-way table, the majority of cells are non-zero. Data: 1986 Census PUMS of $ n=9061$ families in the Toronto CMA, cross-classified using dimensions CFSTRUC (3 categories), TENURE (2), ROOM (10), NUCHILD (9), AGEF (9), LFACTF (5), AGEM (9) and LFACTM (5).
# of                           Cells Equal To
Attributes   # Cells   Median       0       1      2+
1                  3      956    0.0%    0.0%  100.0%
2                  6      478    0.0%    0.0%  100.0%
3                 54       30    5.6%    1.9%   92.6%
4                486        0   57.8%    7.8%   34.4%
5              4,374        0   84.7%    4.0%   11.3%
6             21,870        0   93.9%    2.4%    3.7%
7            196,830        0   98.9%    0.5%    0.6%
8            984,150        0   99.7%    0.1%    0.1%

The IPF method has been applied to tables with a large number of dimensions, as many as eight. Such high-dimensional tables are almost always sparse, with the vast majority of the cells in the table containing zeros. (High-dimensional tables are notorious for their difficulties and for behaving quite differently from two-dimensional tables; see [12] for “ubiquitous” ill-behaved examples.) In these sparse tables, a large fraction of the non-empty cells contain only one observation, and the number of cells is often much larger than the number of observations. In other words, the sample does not provide a statistically meaningful estimate of the probability distribution for such a high-dimensional table. However, the high-dimensional table can be collapsed to produce 2D or 3D tables, each of which is adequately sampled and gives a statistically valid distribution of counts. This is illustrated in Table 4.1, where 99.7% of the cells are zero in the 8-way table. The two-way and three-way margins of the table shown ( $ \textsc{Cfstruc}\times \textsc{Tenure}$ and $ \textsc{Cfstruc}\times \textsc{Tenure}\times \textsc{Room}$) have 0% and 5.6% zeros, and median cell counts of 478 and 30 respectively. Of course, an 8-way table has many other lower-dimensional margins, and only one such choice is illustrated in Table 4.1. (When collapsing a $ d$-way table to $ d'$ dimensions, there are $ {d \choose d'}$ possible choices of variables to keep in the low-dimensional table.) Consequently, while a sparse 8-way table does not provide statistically testable estimates of 8-variable interaction, it can be viewed as a means of linking the many 2- or 3-way tables formed by its margins.

The cell counts in the high dimensional table say very little statistically; in fact, half of the non-zero cells contain just one observation. The most important information in an 8-way table is not necessarily the counts in the cells, but rather the co-ordinates of the non-zero cells, which determine how each cell influences the various low-dimensional margins. In this manner, the importance of the high-dimensional table is in many ways its sparsity pattern, which provides the link between the many 2- and 3-way tables. However, the sparsity pattern in the high dimensional table is in some ways an artefact of sampling. Some of the zeros in the table represent structural zeros: for example, it is essentially impossible for a family to exist where the wife's age is 15, her education is university-level and the family has eight children. Other zeros may be sampling zeros where the sample did not observe a particular combination of variables, but it does exist in the population.

All of the population synthesis strategies reviewed in Chapter 2 preserved the sparsity pattern of the high-dimensional PUMS. It would be useful if the synthesis procedure could account for the uncertainty surrounding the sparsity pattern. One strategy for resolving this would be to simplify the PUMS, building a variant that fits the low-dimensional margins but makes no assumptions about the associations or sparsity pattern in high dimensions where sampling is inadequate. The primary goal would be to correct sampling zeros in the high-dimensional PUMS, replacing them with small positive counts.

Figure 4.1: Simplifying the PUMS by removing high-dimensional associations. A five-dimensional PUMS is simplified by removing four- and five-way associations. This is achieved by fitting a uniform probability table to the complete set of 3-way margins of the PUMS. The resulting table has the same 1-, 2- and 3-way margins as the original PUMS, but has no 4- or 5-way association.

This strategy is illustrated in Figure 4.1. IPF is applied to a high-dimensional table, with the source sample set to a uniform distribution. The constraints applied to IPF are the full set of (say) 3-way cross-tabulations of the PUMS. For a 5-dimensional problem, this is equal to $ {5 \choose 3}=10$ separate constraints. After IPF, the simplified PUMS matches all of the 3-way margins of the PUMS, but includes no 4-way or 5-way interaction between variables. In this example, a 5-way table is shown for illustration, but the intention is for higher-dimensional tables to be simplified. Also, the 3-way margins of the PUMS are used as constraints for illustrative purposes; the actual margins could instead be selected using tests of statistically significant interaction: if a log-linear model shows significant 4-way interactions in the PUMS, then these could be included as margins.
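To make the procedure concrete, the following sketch fits a uniform seed table to every 3-way margin of a cross-classified PUMS using IPF on a dense numpy array. It is a minimal illustration under stated assumptions (tables small enough to hold in memory); the function and variable names are hypothetical, not drawn from any library.

```python
import itertools

import numpy as np

def simplify_pums(pums_table, margin_order=3, iters=100):
    """Remove high-order associations from a cross-classified PUMS:
    fit a uniform seed table to every `margin_order`-way margin of
    `pums_table` using IPF. The result matches all low-order margins
    but contains no higher-order interaction."""
    d = pums_table.ndim
    # Uniform seed with the same total count as the PUMS table.
    fitted = np.full(pums_table.shape, pums_table.sum() / pums_table.size)
    margins = list(itertools.combinations(range(d), margin_order))
    for _ in range(iters):
        for keep in margins:
            drop = tuple(ax for ax in range(d) if ax not in keep)
            target = pums_table.sum(axis=drop)   # margin from the PUMS
            current = fitted.sum(axis=drop)      # margin of current fit
            ratio = np.divide(target, current,
                              out=np.zeros_like(current),
                              where=current > 0)
            # Broadcast the scaling factors back over the collapsed axes.
            fitted *= np.expand_dims(ratio, axis=drop)
    return fitted
```

Sampling zeros in high dimensions are filled with small positive values wherever the low-order margins permit, while any zero appearing in one of the fitted margins remains zero.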

This method does indeed reduce the number of zeros in the high-dimensional table, correcting for sampling zeros and capturing some unusual individuals that were not included in the original PUMS. However, this strategy has significant downsides. In particular, it makes no distinction between structural and sampling zeros at high dimensions: there may be structural zeros involving four or more variables that do not appear in 3-way tables, but they will be treated the same as sampling zeros and be filled in with small positive counts.

Furthermore, it makes the entire population synthesis procedure more difficult, since agents can be synthesized who did not exist in the original PUMS. This makes the technique difficult to combine with some of the other methodological improvements discussed here.

2 Sparse List-Based Data Structure

For a microsimulation model spanning many aspects of society and the economy it is useful to be able to associate a range of attributes with each agent. Different attributes are useful for different aspects of the agent's behaviour. For a person agent, labour force activity, occupation, industry and income attributes are useful for understanding his/her participation in the labour force. Meanwhile, age, marital status, gender and education attributes might be useful for predicting demographic behaviour.

However, as more attributes are associated with an agent, the number of cells in the corresponding multiway contingency table grows exponentially. A multiway contingency table representing the association pattern between attributes has $ I \times J \times K \times \ldots$ cells. If a new attribute with $ L$ categories is added, then $ L$ times more cells are needed. Asymptotically, the storage space is exponential in the number of attributes. As a result, fitting more than eight attributes with a multizone IPF procedure typically requires more memory than is available on a desktop computer. However, the table itself cross-classifies a fixed number of observations (i.e., a PUMS), and is extremely sparse when a large number of attributes are included, as shown in Table 4.1. Is there a way that this sparsity can be exploited to allow the synthesis to create a large number of attributes?

Sparsity is a familiar problem in numerical methods. Many branches of science store large sparse 2D matrices using special data structures that hold only the non-zero sections of the matrix, instead of using a complete array that includes cells for every zero. In the data warehousing field, for example, multiway tables were initially stored in complete arrays accessed through a procedure called MOLAP (Multidimensional On-line Analytical Processing). More recently, that field has shifted to ROLAP (Relational OLAP) methods that allow data to be stored in its natural microdata form while retaining the ability to answer queries efficiently [8].

The IPF method itself has little impact on the sparsity pattern of a table. After the first iteration of fits to the constraints, the final sparsity pattern is essentially known; some cells may eventually converge to near-zero values, but few other changes occur. Once a cell is set to zero, it remains zero for the remainder of the procedure. Also, the IPF algorithm does not require a complete representation of the underlying table. All that it needs are three operations (described later), which could be implemented using either a complete or sparse representation of the data.

Table 4.2: Comparison of memory requirements for implementations of an agent synthesis procedure using a complete array or a sparse list. The Zone-by-Zone columns show the memory requirements when synthesizing $ d$ attributes, all present in the PUMS; the Multizone columns show the storage requirements for $ d$ PUMS attributes plus one non-PUMS attribute, a zone number within the PUMA. Data: PUMS of $ n=9061$ census families in the 1986 Toronto CMA, cross-classified using variables CFSTRUC (3 categories), TENURE (2), ROOM (10), NUCHILD (9), AGEF (9), LFACTF (5), AGEM (9), LFACTM (5), CHILDA (3), CHILDB (4), CHILDC (3), CHILDD (3) and CHILDE (3). The geography variable CTCODE is divided into $ K=731$ zones.
# PUMS           Complete Array                 Sparse List
Attributes $ d$  Zone-by-Zone   Multizone       Zone-by-Zone   Multizone
1 0.00 MB 0.0 MB   0.05 MB 26.5 MB
2 0.00 MB 0.0 MB   0.05 MB 26.5 MB
3 0.00 MB 0.2 MB   0.06 MB 26.5 MB
4 0.00 MB 1.6 MB   0.07 MB 26.5 MB
5 0.02 MB 14.2 MB   0.08 MB 26.5 MB
6 0.10 MB 71.1 MB   0.09 MB 26.5 MB
7 0.87 MB 639.5 MB   0.10 MB 26.6 MB
8 4.37 MB 3,197.4 MB   0.11 MB 26.6 MB
9 13.12 MB 9,592.2 MB   0.12 MB 26.6 MB
10 52.49 MB 38,368.7 MB   0.13 MB 26.6 MB
11 157.46 MB 115,106.2 MB   0.14 MB 26.6 MB
12 472.39 MB 345,318.6 MB   0.14 MB 26.6 MB
13 1,417.18 MB 1,035,955.7 MB   0.15 MB 26.6 MB

The benefits of using a sparse data structure are substantial. Williamson et al. presented many of the arguments when describing their Combinatorial Optimization method: efficiency in space, flexibility of aggregation and easier linking of data sources [57]. In terms of efficiency, the method described here allows the IPF algorithm to be implemented using storage proportional to the number of non-zero cells in the initial table. For agent synthesis with the zone-by-zone method, this is proportional to $ n$ (the number of observations in the PUMS) multiplied by $ d$ (the number of attributes to fit). The multizone method combines several IPF stages, and requires considerably more memory: a similar $ \mathcal{O}(nd)$ in the first stage, but $ \mathcal{O}(n(d+K))$ in the second stage, where $ K$ is the number of zones. This is illustrated in Table 4.2.

Table 4.3: Format of a sparse list-based data structure for Iterative Proportional Fitting. As shown, each row corresponds to a PUMS entry. The columns give the co-ordinates of each PUMS entry within the high-dimensional array. Each row also stores (a) a single weight when synthesizing only PUMS attributes (e.g., a zone-by-zone IPF); or (b) a set of weights, corresponding to the categories of a non-PUMS attribute (e.g., a multizone IPF where the non-PUMS attribute defines a zone (census tract) code within the PUMA).
(a) Single weight per row:

        Co-ordinates                                Weight
1       Husband-wife 7 Owned 3 $ \ldots$ 1          81.8
2       Lone female parent 4 Rented 0 $ \ldots$ 0   70.9
3       Husband-wife 9 Rented 0 $ \ldots$ 0         54.8
4       Husband-wife 9 Owned 0 $ \ldots$ 0          86.2
$ \ldots$                                           $ \ldots$
9060    Husband-wife 9 Rented 0 $ \ldots$ 0         64.8
9061    Husband-wife 6 Rented 0 $ \ldots$ 0         100.3

(b) One weight per category of a non-PUMS attribute:

        Co-ordinates                                Weights
1       Husband-wife 7 Owned 3 $ \ldots$ 1          0.000 0.121 $ \ldots$ 0.021
2       Lone female parent 4 Rented 0 $ \ldots$ 0   0.000 0.212 $ \ldots$ 0.020
3       Husband-wife 9 Rented 0 $ \ldots$ 0         0.000 0.244 $ \ldots$ 0.143
4       Husband-wife 9 Owned 0 $ \ldots$ 0          0.002 0.037 $ \ldots$ 0.019
$ \ldots$                                           $ \ldots$
9060    Husband-wife 9 Rented 0 $ \ldots$ 0         0.000 0.349 $ \ldots$ 0.011
9061    Husband-wife 6 Rented 0 $ \ldots$ 0         0.004 0.213 $ \ldots$ 0.074

There are many types of data structures that could be used to represent a sparse high-dimensional contingency table. The data structure proposed here is not the most efficient, but is conceptually simple. It borrows directly from Williamson's Combinatorial Optimization method: the data is represented as a list of the PUMS microdata entries, with a weight attached to each. The weight is an expansion factor, representing the number of times to replicate that record to form a complete population. Williamson's representation includes only integer weights, and operates on a zone-by-zone basis. The new approach described here behaves exactly like IPF and hence allows fractional weights. With a small extension, it can also support multiple zones: instead of attaching one weight to each PUMS entry, $ K$ weights are attached, one for each zone. An illustration of the format of the data structure is shown in Table 4.3.

As Williamson et al. pointed out, flexible aggregation is a real advantage of a list-based representation. Complete array storage is proportional to the number of categories used for each attribute, while the sparse storage scheme is not affected by the categorization of the attributes. Many applications of IPF that used complete arrays were forced to abandon detailed categorization schemes to conserve space and allow more attributes to be synthesized (e.g., [2]). This in turn makes it difficult to apply several margins, since different margins may categorize a single attribute differently. With the sparse scheme, however, an attribute can be represented with a fine categorization and collapsed to different coarse categorizations as required during the fitting procedure.
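As a small illustration of this flexibility, collapsing a finely categorized attribute to a coarser scheme only requires re-mapping category codes in the list; the number of rows, and hence the storage, is unchanged. The `recode` helper below is hypothetical, assuming the co-ordinates are held in an integer array.

```python
import numpy as np

def recode(rows, attr, mapping):
    """Collapse one attribute of a sparse list to a coarser
    categorization. `rows` is an (n, d) integer array of category
    co-ordinates; `mapping[fine_code] -> coarse_code`."""
    out = rows.copy()
    out[:, attr] = np.asarray(mapping)[rows[:, attr]]
    return out
```

Margins that categorize the same attribute differently can each be fitted by collapsing the fine codes on the fly.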

1 Algorithmic Details

The operation of a typical IPF algorithm was presented earlier in Figure 2.4. To implement this procedure with the sparse list structure, the following operations are necessary:

If the target population margins remain stored as a complete array, these operations are relatively straightforward. The collapse operation can be done in a single pass over the list, using the category numbers in each list row as co-ordinates into the complete array that stores the collapsed table. The update operation can likewise be done in a single pass over the list. All of these operations are fast, with complexity equal to the storage cost, $ \mathcal{O}(nd)$. (The method is basically the same when non-PUMS attributes are included and multiple weights are stored per row in the list. The non-PUMS attribute must be handled as a special case, since it is stored slightly differently. Since there are $ K$ times more weights, the computation cost grows proportionally to $ \mathcal{O}(n(d+K))$, again matching the storage cost.)
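A minimal sketch of the collapse and update operations on the list structure might look as follows (plain Python loops for clarity; a practical implementation would vectorize them, and all names are illustrative):

```python
import numpy as np

def collapse(rows, weights, axes, shape):
    """Sum the list's weights into a margin array. `rows` is an (n, d)
    array of category co-ordinates, one row per PUMS record; `axes`
    selects the attributes that form the margin."""
    margin = np.zeros([shape[a] for a in axes])
    for r, w in zip(rows, weights):
        margin[tuple(r[a] for a in axes)] += w
    return margin

def update(rows, weights, axes, target, shape):
    """One IPF fit: scale each row's weight by target/current for the
    margin cell that the row falls into."""
    current = collapse(rows, weights, axes, shape)
    ratio = np.divide(target, current,
                      out=np.zeros_like(current), where=current > 0)
    for idx, r in enumerate(rows):
        weights[idx] *= ratio[tuple(r[a] for a in axes)]
    return weights
```

Both passes touch each of the $ n$ rows once, matching the $ \mathcal{O}(nd)$ storage cost.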

The only tricky part of the procedure is setting the initial weights. When using a complete array representation for zone-by-zone methods, the initial table needs to follow the distribution of the PUMS, while for Beckman et al.'s multizone method, the initial table needs to be a uniform distribution, usually achieved by setting all cells to one. In the sparse representation described here, the situation is reversed because there is one row per PUMS entry, instead of one cell grouping many PUMS entries. As a result, the initial weights need to be 1.0 for the zone-by-zone method. For the multizone method, the sum of the weights in the $ r$ rows that contribute to a single “cell” in the complete table needs to add to one; the individual weights should therefore be $ 1/r$. The difficulty lies in finding how many rows share a cell, since the data is not structured for this purpose. To achieve this, the list is sorted by the table dimensions, which has the effect of grouping the rows that contribute to a single cell. In this manner, the initial weights can be set in $ \mathcal{O}(n \log n)$ time.
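The sorting step can be sketched as follows (illustrative code, again assuming the co-ordinates are held in an integer array):

```python
import numpy as np

def initial_multizone_weights(rows):
    """Give each row an initial weight of 1/r, where r is the number of
    rows sharing its cell (identical co-ordinates), so that the implied
    complete table starts as a uniform table of ones. Sorting groups
    identical rows together, for O(n log n) total cost."""
    order = np.lexsort(rows.T)  # any total order that groups equal rows
    weights = np.empty(len(rows))
    start = 0
    while start < len(order):
        end = start
        while end < len(order) and (rows[order[end]] == rows[order[start]]).all():
            end += 1
        weights[order[start:end]] = 1.0 / (end - start)  # r = end - start
        start = end
    return weights
```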

Additionally, the multizone procedure requires a fit to the distribution of all non-geographic variables simultaneously. (See the $ \mathbf{\hat{N}}_{ij+}$ margin on the right half of Figure 2.6.) This margin is high-dimensional—it includes all variables except for the geographic variable. However, it is also sparse, and is in fact computed through an IPF procedure. As a result, it is already stored as a sparse list with a single weight per row. This weight can be treated as a constraint on the total for the weights in each row, and suitable collapse/update procedures are then easily defined. The computation cost is still $ \mathcal{O}(n(d+K))$ for this type of constraint.

Finally, the Monte Carlo integerization for this sparse structure is quite simple, and little changed. The list of weights (or 2D array of weights for the multizone procedure) is normalized and treated as a probability mass function, and individual rows (cells for multizone) are synthesized using Monte Carlo draws.
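A sketch of this integerization step for the single-weight case (the function name and fixed seed are illustrative):

```python
import numpy as np

def integerize(weights, rng=None):
    """Monte Carlo integerization: normalize the fitted weights into a
    probability mass function and draw whole agents from it. Each draw
    selects one PUMS row to replicate in the synthetic population."""
    if rng is None:
        rng = np.random.default_rng(0)
    weights = np.asarray(weights, dtype=float)
    pmf = weights / weights.sum()
    n_agents = int(round(weights.sum()))  # population size implied by the fit
    return rng.choice(len(weights), size=n_agents, p=pmf)
```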

2 Discussion

This sparse data structure removes a substantial limitation from the IPF algorithm, but also raises new questions. Is there a limit to the number of attributes that can be synthesized? If there is a limit, how is it related to the size $ n$ of the PUMS sample?

The answers to these questions remain elusive. Ultimately, the addition of a new attribute is a decision to increase the dimension of the multiway contingency table. As discussed earlier, the behaviour of this high-dimensional table and its relationship to lower-dimensional margins remains poorly understood.

3 Fitting to Randomly Rounded Margins

Many census agencies apply random rounding procedures to published tables, including the agencies of Canada, the United Kingdom and New Zealand. Each agency chooses a base $ b$, and then modifies a cell count $ N_{i+}$ by rounding it up to the nearest multiple of $ b$ with probability $ p$, or down with probability $ 1-p$. In most applications, a procedure called unbiased random rounding is used, where $ p=(N_{i+} \bmod b)/b$. The alternative is called unrestricted random rounding, where $ p$ is constant and independent of the cell values; for example, with $ p=0.5$ it is equally likely that a cell will be rounded up or down.

For example, cells and marginal totals in Canadian census tables are randomly rounded up or down to a multiple of $ b=5$ using the unbiased procedure. For a cell with a count of $ N_{i+}=34$, there is a 20% probability that it is published as $ \tilde{N}_{i+}=30$ and an 80% probability that it is published as $ \tilde{N}_{i+}=35$. Most importantly, the expected value is equal to that of the unrounded count; it is therefore an unbiased random rounding procedure.
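The rounding rule is easy to state in code. The sketch below (a hypothetical helper, not any agency's official implementation) supports both the unbiased and unrestricted variants:

```python
import random

def random_round(count, base=5, p=None):
    """Randomly round `count` to a multiple of `base`. With p=None the
    unbiased rule is used, p = (count mod base)/base, so the expected
    value of the published count equals the true count. Passing a fixed
    p (e.g. 0.5) gives unrestricted random rounding."""
    remainder = count % base
    if remainder == 0:
        return count          # already a multiple of the base
    if p is None:
        p = remainder / base  # unbiased rounding probability
    if random.random() < p:
        return count + (base - remainder)  # round up
    return count - remainder               # round down
```

For a count of 34 with $ b=5$, the helper returns 35 with probability 0.8 and 30 with probability 0.2, so its long-run mean is the unrounded count.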

As discussed by Huang & Williamson [28], this can lead to conflicts between tables: two different cross-tabulations of the same variable or set of variables may be randomly rounded to different values. The standard IPF procedure will not converge in this situation. The procedure is also unable to take into account the fact that margins do not need to be fitted exactly, since there is a reasonable chance that the correct count is within $ \pm 4$ of the reported count.

1 Modified Termination Criterion

Using the termination criterion of Figure 2.4 (line 10), the IPF procedure will not necessarily terminate if two randomly rounded margins conflict. The termination criterion shown requires the fitted table to match all margins simultaneously:

$\displaystyle \delta = \max\left( \max_i \left\vert \hat{N}_{i+}^{(\tau+2)} - N_{i+} \right\vert,\; \max_j \left\vert \hat{N}_{+j}^{(\tau+2)} - N_{+j} \right\vert \right)$ (4.1)

Instead of requiring a fit, the algorithm could terminate when the net effect of one iteration drops below a threshold. That is,

$\displaystyle \delta = \max\left( \max_i \left\vert \hat{N}_{i+}^{(\tau+2)} - \hat{N}_{i+}^{(\tau)} \right\vert,\; \max_j \left\vert \hat{N}_{+j}^{(\tau+2)} - \hat{N}_{+j}^{(\tau)} \right\vert \right)$ (4.2)

or even

$\displaystyle \delta = \max_{i,j} \left\vert \hat{N}_{ij}^{(\tau+2)} - \hat{N}_{ij}^{(\tau)} \right\vert$ (4.3)

The intention here is to terminate the algorithm when the change in error in the margins drops below a threshold, instead of the absolute error.
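For a two-dimensional table, the change-based criterion can be sketched as follows (illustrative code, assuming a positive seed and positive targets):

```python
import numpy as np

def ipf_with_change_criterion(seed, row_targets, col_targets,
                              tol=1e-6, max_iters=1000):
    """2-D IPF that stops when one full iteration no longer changes the
    row and column margins, rather than when the margins exactly match
    the (possibly conflicting, randomly rounded) targets."""
    fitted = seed.astype(float).copy()
    for _ in range(max_iters):
        prev_rows = fitted.sum(axis=1)
        prev_cols = fitted.sum(axis=0)
        fitted *= (row_targets / fitted.sum(axis=1))[:, None]  # fit rows
        fitted *= (col_targets / fitted.sum(axis=0))[None, :]  # fit cols
        delta = max(np.abs(fitted.sum(axis=1) - prev_rows).max(),
                    np.abs(fitted.sum(axis=0) - prev_cols).max())
        if delta < tol:
            break
    return fitted
```

With conflicting targets (say row totals summing to 100 but column totals summing to 105), the absolute-error criterion never triggers, while the change-based version terminates once the iteration reaches a stable cycle.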

2 Hierarchical Margins

For each cross-tabulation, statistical agencies publish a hierarchy of margins, and these margins are rounded independently of the cells in the table. For a three-way table $ N_{ijk}$ randomly rounded to give $ \tilde{N}_{ijk}$, the data release will also include randomly rounded two-way margins $ \tilde{N}_{ij+}$, $ \tilde{N}_{i+k}$ and $ \tilde{N}_{+jk}$, one-way margins $ \tilde{N}_{i++}$, $ \tilde{N}_{+j+}$ and $ \tilde{N}_{++k}$, and a zero-way total $ \tilde{N}_{+++}$. The sum of the cells does not necessarily match the marginal total. For example, the sum $ \sum_k{\tilde{N}_{ijk}}$ includes $ K$ randomly rounded counts. The expected value of this sum is the true count $ N_{ij+}$, but the variance is large and the sum could be off by as much as $ K(b-1)$ in the worst case. By contrast, the reported marginal total $ \tilde{N}_{ij+}$ also has the correct expected value, but its error is at most $ b-1$.

For this reason, it seems sensible to include the hierarchical margins in the fitting procedure, in addition to the detailed cross-tabulation itself.

3 Projecting onto Feasible Range

As described in equations (2.6) and (2.1), the IPF procedure minimizes the Kullback-Leibler divergence $ I({\bf\hat{N}} \Vert {\bf n})$,

$\displaystyle I({\bf\hat{N}} \Vert {\bf n}) = \sum_i{\sum_j{\hat{N}_{ij}\log(\hat{N}_{ij}/n_{ij}) }}$    

while satisfying the marginal constraints

$\displaystyle \sum_j \hat{N}_{ij} = \tilde{N}_{i+},\quad \sum_i \hat{N}_{ij} = \tilde{N}_{+j}$    

To handle random rounding, the marginal constraints could instead be treated as inequalities,

$\displaystyle \left\vert\sum_j \hat{N}_{ij} - \tilde{N}_{i+}\right\vert \leq b-1,\quad \left\vert\sum_i \hat{N}_{ij} - \tilde{N}_{+j}\right\vert \leq b-1$ (4.4)

That is, any value within the range $ \tilde{N}_{i+} \pm (b-1)$ is an acceptable solution, with no preference for any single value within that range.

Dykstra's generalization of IPF [17] provides some fruitful ideas for handling this type of constraint. Csiszár [13] described the IPF procedure as a series of projections onto the subspace defined by each constraint. Csiszár was not working in a $ d$-dimensional space (where $ d$ is the number of attributes being fitted), but in a $ C$-dimensional space (where $ C$ is the number of cells in the table) representing all possible probability distributions, which has since been called $ I$-space.

Nevertheless, the idea of projection is still useful: each iteration of IPF is a modification of the probability distribution to fit a margin. It is a “projection” in that it finds the “closest” probability distribution in terms of Kullback-Leibler divergence, just as the projection of a point onto a plane finds the closest point on the plane in terms of Euclidean distance. (Note, however, that Kullback-Leibler divergence is not a true distance metric.)

Csiszár only considered equality constraints. Dykstra extended Csiszár's method to include a broader range of constraints: any closed convex set in $ I$-space. This class of constraints appears to include the desired inequality constraints defined by Equation 4.4. Dykstra's method for applying these constraints is also a projection procedure, finding the set of counts that satisfy the constraint while minimizing the Kullback-Leibler divergence.

To give an example, consider the algorithm of Figure 2.4. Line 5 would be replaced with

$\displaystyle \hat{N}_{ij}^{(\tau+1)} = \begin{cases} \hat{N}_{ij}^{(\tau)} \dfrac{\tilde{N}_{i+} - (b-1)}{\hat{N}_{i+}^{(\tau)}} & \textrm{if } \hat{N}_{i+}^{(\tau)} < \tilde{N}_{i+} - (b-1) \\ \hat{N}_{ij}^{(\tau)} \dfrac{\tilde{N}_{i+} + (b-1)}{\hat{N}_{i+}^{(\tau)}} & \textrm{if } \hat{N}_{i+}^{(\tau)} > \tilde{N}_{i+} + (b-1) \\ \hat{N}_{ij}^{(\tau)} & \textrm{otherwise} \end{cases}$ (4.5)

(and likewise for line 8).
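A sketch of this clamped row update follows (the column version is symmetric; names are illustrative). Each row is rescaled only when its sum falls outside the feasible band $ \tilde{N}_{i+} \pm (b-1)$ implied by random rounding:

```python
import numpy as np

def project_rows_to_range(fitted, row_targets, b=5):
    """One projection step: rescale each row of `fitted` onto the
    nearest edge of the band target +/- (b-1), leaving rows that
    already satisfy the inequality constraint untouched."""
    fitted = fitted.copy()
    lo = row_targets - (b - 1)
    hi = row_targets + (b - 1)
    sums = fitted.sum(axis=1)
    for i, s in enumerate(sums):
        if s < lo[i]:
            fitted[i] *= lo[i] / s   # scale up to the lower bound
        elif s > hi[i]:
            fitted[i] *= hi[i] / s   # scale down to the upper bound
        # otherwise: row already inside the feasible range; leave it
    return fitted
```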

Table 4.4: Relationship between unknown true count and the randomly rounded count published by the statistical agency. The table shows the probability distribution for unrounded count $ N_{i+}$ given published randomly rounded count $ \tilde{N}_{i+}$, assuming base $ b=5$.
$ \Delta$ $ P(N_{i+} = \tilde{N}_{i+} + \Delta \,\vert\, \tilde{N}_{i+})$
-5 0%
-4 4%
-3 8%
-2 12%
-1 16%
0 20%
1 16%
2 12%
3 8%
4 4%
5 0%

However, this projection procedure has its own problems. The standard IPF procedure ignores the probability distribution associated with each marginal value and uses only the published cell count. The projection procedure described here suffers from the opposite problem: it focuses on the range of possible values, without acknowledging that one outcome is known to be more likely than the others. To see this, consider the probability distribution for the unrounded value $ N_{i+}$ given the published value $ \tilde{N}_{i+}$ shown in Table 4.4. The distribution is triangular, with a strong central peak. The projection algorithm forces the fit to match the range $ \pm (b-1)$ of this distribution, but it treats all values inside this range as equally probable. This would be suitable if the census used unrestricted random rounding, but not for the more typical case where unbiased rounding is used.

4 Synthesizing Agent Relationships

Suppose that a population of person agents has been synthesized, with a limited amount of information about their relationships in families (such as CFSTRUC, which classifies a person as married, a lone parent, a child living with parent(s), or a non-family person). In the absence of any information about how families form, the persons could be formed into families in a naïve manner: randomly select male married persons and attach them to female married persons, and randomly attach children to couples or lone parents. Problems would immediately emerge: some persons would be associated in implausible ways, such as marriages with age differences over 50 years, marriages between persons living at opposite ends of the city, or parents who are younger than their children.

A well-designed relationship synthesis procedure should carefully avoid such problems. A good choice of relationships satisfies certain constraints between agents' attributes, such as the mother being older than her child, or the married couple living in the same zone. It also follows known probability distributions, so that marriages with age differences over 50 years have a low but non-zero incidence.

Most constraints and probability distributions are observed in microdata samples of aggregate agents, such as families and households. A complete Family PUMS includes the ages of mothers and children, and none of the records includes a mother who is younger than her children. Similarly, only a small fraction of the records include marriages between couples with ages differing by more than 50 years. The question, however, is one of method: how can relationships between agents be formed to ensure that the desired constraints are satisfied?

Guo & Bhat [26] used a top-down approach, synthesizing a household first and then synthesizing individuals to connect to the household. The attributes used to link the two universes were gender and age: the gender of the husband/wife or lone parent is known, along with coarse constraints on the age of the household head (15-64 or 65+) and of the children (some aged 0-18, or all 18+). These constraints are quite loose, and no constraint is enforced between the husband's and wife's ages or between parent and child ages.

Guan [25] used a bottom-up approach to build families, with slightly stronger constraints. The persons were synthesized first, and then assembled to form families. Children were grouped together (and constrained to have similar ages), then attached to parents. Constraints between parent/child ages and husband/wife ages were included, although the method used to enforce them has some drawbacks. Guan likewise used a bottom-up approach to combine families and non-family persons into households.

Arentze & Timmermans [2] only synthesized a single type of agent, the household. Their synthesis included the age and labour force activity of both husband and wife, and the linkage to the number of children in the household. They did not connect this to a separate synthesis of persons with detailed individual attributes, but by synthesizing at an aggregate level, they guaranteed that the population was consistent and satisfied key constraints between family members.

Both Guo & Bhat and Guan's procedures suffer from inconsistencies between the aggregate and disaggregate populations. The family population may contain 50 husband-wife families in zone $ k$ where the husband has age $ i$, while the person population contains only 46 married males of age $ i$ in zone $ k$. In the face of such inconsistencies, either families or persons must be changed: a family could be attached to a male of age $ i' \neq i$, or a person could be modified to fit the family. In both cases, either the family or person population is deemed “incorrect” and modified. The editing procedures are difficult to perform, and inherently ad hoc. Furthermore, as the number of overlapping attributes between the two populations grows, inconsistencies become quite prevalent.

What are the sources of these inconsistencies? They come from two places: first, the fitting procedure used to estimate the population distribution $ \mathbf{\hat{N}}^P$ for persons and $ \mathbf{\hat{N}}^F$ for families may not give the same totals for a given set of common attributes. Second, even if $ \mathbf{\hat{N}}^P$ and $ \mathbf{\hat{N}}^F$ agree on all shared attributes, the populations produced by Monte Carlo synthesis may not agree, since the Monte Carlo procedure is non-deterministic. In the following sections, a method is proposed to resolve these two issues.

1 Fitting Populations Together

For the purposes of discussion, consider a simple synthesis example: synthesizing husband-wife families. Suppose that the universe of persons includes all persons, with attributes for gender $ \textsc{Sexp}(g)$, family status $ \textsc{Cfstruc}(h)$, age $ \textsc{Agep}(i)$, education $ \textsc{Hlosp}(j)$ and zone $ \textsc{Ctcode}(k)$. The universe of families includes only husband-wife couples, with attributes for the age of husband $ \textsc{Agem}(i_m)$ and wife $ \textsc{Agef}(i_f)$, and zone $ \textsc{Ctcode}(k)$. IPF has already been used to estimate the contingency table cross-classifying persons ( $ \mathbf{\hat{N}}^P_{ghijk}$) and likewise for the table of families ( $ \mathbf{\hat{N}}^F_{i_m i_f k}$). The shared attributes between the two populations are age and zone, and implicitly gender. The two universes do not overlap directly, since only a fraction of the persons belong to husband-wife families; the others may be lone parents, children, or non-family persons, and are categorized as such using the $ \textsc{Cfstruc}$ attribute.

For $ \mathbf{\hat{N}}^P$ and $ \mathbf{\hat{N}}^F$ to be consistent, the following must hold for $ h=\textit{husband-wife}$ and any choice of $ i,k$:

$\displaystyle \hat{N}^P_{ghi+k} = \begin{cases} \hat{N}^F_{i+k} \quad & \textrm{for } g=\textit{male} \\ \hat{N}^F_{+ik} \quad & \textrm{for } g=\textit{female} \end{cases}$ (4.6)

That is, the number of married males of age $ i$ in zone $ k$ must equal the number of husband-wife families with a husband of age $ i$ in zone $ k$. While this might appear simple, it is often not possible with the available data. A margin $ \mathbf{N}^P_{g+i+k}$ giving the $ \textsc{Sexp}\times \textsc{Agep}\times \textsc{Ctcode}$ distribution is probably available to apply to the person population. However, a similar margin for just married males is unlikely to exist for the family population; instead, the age breakdown for married males in the family usually comes from the PUMS alone. As a result, equation (4.6) is not satisfied.

One suggestion immediately leaps to mind: if the person population is fitted with IPF first and $ \mathbf{\hat{N}}^P$ is known, the slice of $ \hat{N}^P_{ghi+k}$ where $ g=\textit{male}$ and $ h=\textit{husband-wife}$ could be applied as a margin to the family fitting procedure, and likewise for $ g=\textit{female}$. This is entirely feasible, and does indeed guarantee matching totals between the populations. The approach can be used for the full set of attributes shared between the individual and family populations. There is one downside, however: it can only be performed in one direction. The family table can be fitted to the person table or vice versa, but they cannot be fitted simultaneously.4
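This one-directional fitting can be sketched as follows. The table sizes, the random stand-in for the fitted person table, and the minimal `ipf` helper are all illustrative assumptions, not data or code from the text; the point is only that the married-male and married-female slices of the person table become target margins for the family fit. (Since the two slices of a toy random table need not agree on the total families per zone, one margin is rescaled to make the targets mutually consistent.)

```python
import numpy as np

rng = np.random.default_rng(0)

def ipf(seed, targets, iters=200):
    """Iterative proportional fitting: repeatedly rescale `seed` so its sums
    over the dropped axes match each (kept_axes, margin) target in turn."""
    x = seed.astype(float).copy()
    for _ in range(iters):
        for keep, margin in targets:
            drop = tuple(a for a in range(x.ndim) if a not in keep)
            cur = x.sum(axis=drop, keepdims=True)
            x = x * (margin.reshape(cur.shape) / cur)
    return x

# Stand-in for the fitted person table, indexed (Sexp, Cfstruc, Agep, Ctcode);
# sizes and values are illustrative, not Census data.
G, H, I, K = 2, 4, 3, 2
N_P = rng.random((G, H, I, K)) * 50

# Slices for married males/females become margins for the family fit.
male, female, husband_wife = 0, 1, 1
male_margin = N_P[male, husband_wife]        # shape (Agep, Ctcode)
female_margin = N_P[female, husband_wife]
# The margins must agree on total families per zone, so rescale one of them.
female_margin = female_margin * male_margin.sum(0) / female_margin.sum(0)

# Family seed table over (age of husband, age of wife, zone), from the PUMS.
seed_F = rng.random((I, I, K)) + 0.1
N_F = ipf(seed_F, [((0, 2), male_margin), ((1, 2), female_margin)])

# Equation (4.6) now holds by construction.
print(np.allclose(N_F.sum(axis=1), male_margin))    # True
print(np.allclose(N_F.sum(axis=0), female_margin))  # True
```

The same mechanism extends to any set of shared attributes: each slice of the person table over the shared attributes is simply appended to the list of family-fitting targets.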

One wrinkle remains: it is possible that the family population will still not be able to fit the total margin from the individual population, due to a different sparsity pattern. For example, if the family PUMS includes no families where the male is (say) 15-19 years old but the individual PUMS does include a married male of that age, then the fit cannot be achieved. This is rarely an issue when a small number of attributes are shared, but it is readily observed when many attributes are shared between the two populations. The simplest solution is to minimize the number of shared attributes, or to use a coarse categorization for the purposes of linking the two sets of attributes.

Alternatively, the two PUMS could be cross-classified using the shared attributes and forced to agree. For example, for $ g=\textit{male}$ and $ h=\textit{husband-wife}$, then the pattern of zeros in $ \mathbf{n}^P_{ghi++}$ and $ \mathbf{n}^F_{i++}$ could be forced to agree by setting cells to zero in one or both tables. (In the earlier example, this would remove the married male of age 15-19 from the Person PUMS.) The person population is then fitted using this modified PUMS, and the family population is then fitted to the margin of the person population.
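The zero-forcing step above can be illustrated with toy counts (the age groups and values here are made up, not taken from the Census PUMS): a cell survives only if both samples observed it.

```python
import numpy as np

# Toy cross-classifications of the two PUMS over the shared attribute
# (age group of a married male); counts are illustrative only.
n_P = np.array([0, 3, 12, 9, 1, 0])   # person PUMS: married males by age group
n_F = np.array([1, 0, 10, 8, 2, 0])   # family PUMS: husbands by age group

# Force the sparsity patterns to agree: keep a cell only if it is
# non-zero in BOTH samples.  (This drops e.g. the lone 15-19 husband
# observed in only one of the two PUMS files.)
common = (n_P > 0) & (n_F > 0)
n_P_fixed = np.where(common, n_P, 0)
n_F_fixed = np.where(common, n_F, 0)

print(n_P_fixed.tolist())   # [0, 0, 12, 9, 1, 0]
print(n_F_fixed.tolist())   # [0, 0, 10, 8, 2, 0]
```

The modified person PUMS then serves as the IPF seed, and the family population is fitted to the resulting person margin as before.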

2 Conditioned Monte Carlo

The second problem with IPF-based synthesis stems from the independent Monte Carlo draws used to synthesize persons and families. For example, suppose that mutually fitted tables $ \mathbf{\hat{N}}^P$ and $ \mathbf{\hat{N}}^F$ are used with Monte Carlo to produce a complete population of persons and families $ \mathbf{\hat{N'}}^P \in \mathbb{Z}$ and $ \mathbf{\hat{N'}}^F \in \mathbb{Z}$. If it can be guaranteed for $ g=\textit{male}$ and $ h=\textit{husband-wife}$ that

$\displaystyle \mathbf{\hat{N'}}^P_{ghi+k} = \mathbf{\hat{N'}}^F_{i+k}$ (4.7)

(and likewise for $ g=\textit{female}$), then a perfectly consistent set of connections between persons and families is possible. How can equation (4.7) be satisfied?

A simplistic solution would be a stratified sampling scheme: for each combination of $ i$ and $ k$, select a number of individuals to synthesize and make exactly that many draws from the subtables $ \mathbf{\hat{N}}_{++i+k}^P$ and $ \mathbf{\hat{N}}_{i+k}^F$. This approach breaks down when the number of strata grows large, as it inevitably does when more than one attribute is shared between persons and families.
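The stratified scheme can be sketched as follows, with toy strata and made-up conditional distributions (none of the counts or weights are from the text); it works, but the number of `(i, k)` strata grows multiplicatively with each additional shared attribute.

```python
import random
from collections import Counter

random.seed(1)

# Toy strata: (age group i, zone k) -> number of persons to draw, taken
# directly from a fitted family table (counts are illustrative).
strata = {(0, 0): 3, (0, 1): 2, (1, 0): 4}

# Stand-in conditional subtables: education distribution within each stratum.
educ = {(0, 0): [0.5, 0.5], (0, 1): [0.2, 0.8], (1, 0): [0.7, 0.3]}

persons = []
for (i, k), count in strata.items():
    # Make exactly `count` draws from the stratum's subtable, so person
    # and family totals agree per stratum by construction.
    for _ in range(count):
        j = random.choices([0, 1], weights=educ[(i, k)])[0]
        persons.append((i, k, j))

totals = Counter((i, k) for (i, k, _) in persons)
print(totals == Counter(strata))   # True: per-stratum totals match
```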

The problem becomes clearer once the reason for mismatches is recognized. Suppose a Monte Carlo draw selects a family with a husband of age $ i$ in zone $ k$. Nothing synchronizes this draw with the draws from the person population, which must independently produce a married male of age $ i$ in zone $ k$. Synchronization could instead be achieved by conditioning the person population draws on the family population draws. Instead of selecting a random value from the joint distribution

$\displaystyle P(\textsc{Sexp}, \textsc{Cfstruc}, \textsc{Agep}, \textsc{Hlosp}, \textsc{Ctcode})$

of the person population, a draw from the conditional distribution

$\displaystyle P(\textsc{Hlosp}\,\vert\,\textsc{Sexp}=\textit{male}, \textsc{Cfstruc}=\textit{husband-wife}, \textsc{Agep}=i, \textsc{Ctcode}=k)$

could be used, and a similar draw made for the wife. Converting the joint distribution generated by IPF to a conditional distribution is straightforward: the joint table is sliced at the conditioning categories and renormalized.
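The slice-and-renormalize operation can be sketched directly; the joint table here is a random stand-in with illustrative dimension sizes, not a fitted Census table.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in joint table over (Sexp, Cfstruc, Agep, Hlosp, Ctcode);
# the category counts are illustrative only.
joint = rng.random((2, 4, 3, 5, 2))

def conditional(joint, g, h, i, k):
    """P(Hlosp | Sexp=g, Cfstruc=h, Agep=i, Ctcode=k): slice the joint
    table at the conditioning categories and renormalize."""
    slice_ = joint[g, h, i, :, k]
    return slice_ / slice_.sum()

# Conditioned draw of the husband's education, given his age and zone.
p = conditional(joint, g=0, h=1, i=2, k=1)
hlosp = int(rng.choice(p.size, p=p))
print(round(p.sum(), 6))   # 1.0
```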

Figure 4.2: A top-down algorithm for synthesizing persons and husband-wife families (a loop over families $ 1 \ldots N^F$).

Figure 4.3: A bottom-up algorithm for synthesizing persons and husband-wife families (a loop over persons $ 1 \ldots N^P$).

This reversal of the problem guarantees that equation (4.7) is satisfied, and allows consistent relationships to be built between agents. While it has been described here in a top-down manner (from family to person), it can be applied in either direction. The two approaches are contrasted in Figures 4.2 and 4.3.
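The top-down direction can be sketched end-to-end as follows. The fitted tables are random stand-ins with toy sizes (age $\times$ education $\times$ zone), and the helper names are made up for illustration; the essential point is step 2, where each member is drawn conditionally on the family's attributes, so the husband's age and zone match his family exactly.

```python
import numpy as np

rng = np.random.default_rng(7)
I, J, K = 3, 4, 2   # toy sizes: age, education, zone (illustrative)

# Stand-ins for the mutually fitted tables: families over (age_m, age_f,
# zone), and married persons over (gender, age, education, zone).
N_F = rng.random((I, I, K))
N_P = rng.random((2, I, J, K))

def draw_joint(table):
    """Draw one cell index from a table, proportional to its values."""
    flat = table.ravel() / table.sum()
    return np.unravel_index(rng.choice(flat.size, p=flat), table.shape)

def draw_cond(slice_):
    """Draw from a 1-D conditional distribution (renormalized slice)."""
    return int(rng.choice(slice_.size, p=slice_ / slice_.sum()))

families = []
for _ in range(100):                    # top-down: loop over N^F families
    i_m, i_f, k = draw_joint(N_F)       # 1. draw the family's attributes
    e_m = draw_cond(N_P[0, i_m, :, k])  # 2. husband's education | age, zone
    e_f = draw_cond(N_P[1, i_f, :, k])  #    wife's education | age, zone
    families.append(((i_m, e_m), (i_f, e_f), k))

# Every synthesized member matches its family's shared attributes, so
# equation (4.7) holds by construction.
print(len(families))   # 100
```

The bottom-up variant simply reverses the conditioning: persons are drawn first, and family attributes are drawn conditional on the persons already synthesized.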

3 Summary

As demonstrated in the preceding sections, it is possible to synthesize persons and link them together to form families, while still guaranteeing that the resulting populations of persons and families approximately satisfy the fitted tables $ \mathbf{\hat{N}}^P$ and $ \mathbf{\hat{N}}^F$. By carefully choosing a set of shared attributes between the person and family agents and using conditional synthesis, a limited number of constraints can be applied to the relationship formation process. In the example discussed earlier, the ages of husband and wife were constrained; in a more realistic example, the labour force activity of husband and wife, the number of children and the ages of children might also be constrained. Furthermore, multiple levels of agent aggregation could be defined: families and persons could be further grouped into households and attached to dwelling units.

The synthesis order for the different levels of aggregation can be varied as required, using either a top-down or bottom-up approach. However, the method is still limited in the types of relationships it can synthesize: it can only represent nesting relationships. Each individual person can only belong to one family, which belongs to one household. Other types of relationships cannot be synthesized using this method, such as a person's membership in another group (e.g., a job with an employer).