Back in the '80s (yes, I am that old), people used to model software and systems with data flow diagrams. Among the problems with the commonly used approaches to modeling was the notion of limiting each diagram to "7 +/- 2" elements. To comply with this goal, deeply nested decomposition hierarchies were created, sometimes dozens of levels deep, all to ensure that each diagram held only a tiny number of elements. The resulting models were virtually impossible to navigate and practically impossible to actually use. Why did people even try?
The reason is that someone, whose identity is lost, decided to apply the results of psychological research - notably the 1956 paper "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" (by George A. Miller, published in Psychological Review) - to visual modeling. The actual paper is essentially a discussion of the correlation between the limits of short-term memory and our capacity to reason about things. Specifically, if a person is presented with stimuli that vary along a single dimension (e.g. tones varying in pitch) and is asked to give a value-specific response for each, performance drops abruptly at around seven distinguishable stimuli. This correlates, according to Miller, with memory span - the longest list of items that a person can recall correctly 50% or more of the time. Interestingly, Miller recognized that the correlation is actually coincidental, a point ignored by many subsequent readers of the paper.
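A quick bit of arithmetic (my gloss, not the paper's wording) shows where the number seven comes from: Miller framed the limit as an information channel capacity, and reliably distinguishing among seven one-dimensional stimuli corresponds to about 2.8 bits, close to the roughly 2.6-bit average capacity he reported for such judgements:

\[
  \log_2 7 \approx 2.8 \text{ bits}, \qquad 2^{2.6} \approx 6 \text{ categories}
\]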
So there is nothing "magic" about the magic number 7 +/- 2, and even if there were, Miller's research doesn't apply to visual modeling. In visual modeling, memory span and one-dimensional absolute judgement are irrelevant because the subject elements are right in front of you all the time and don't have to be recalled in order to be used in reasoning. Further, adherence to this rule in visual modeling leads to arbitrarily decomposing collections of directly related elements into distributed, aggregated decomposition hierarchies. This converts one-dimensional absolute judgements into multi-dimensional ones - that is, it increases the conceptual distance between directly related elements by introducing unnecessary and arbitrary separation. This makes comprehension of the relations among those elements far more difficult than it would be if the direct relationships were maintained.
So what's the alternative? The Harmony(r) process uses the notion of a "diagram mission statement" - a singular concept or purpose visualized by the diagram. The primary reason that many models are so difficult to navigate and understand is that their diagrams either have no coherent intent or have too many competing ones. If we create a set of class diagrams, each with a specific mission - such as showing the collaboration realizing a use case, a generalization taxonomy, the contents of a package, or an architectural viewpoint - then each diagram becomes a clear statement of that mission. This means that class diagrams are built up around interesting aspects of the model so that the stakeholders can address specific questions. If you have a new question, you can build up a viewpoint (i.e. a diagram) around that question. As an aside, I usually explicitly state the mission in a comment in a corner of the diagram. For example, such a comment might read "The mission of this diagram is to show the collaboration of high-level elements realizing the 'Track Tactical Objects' Use Case." The diagram then shows all of the elements that contribute to that mission, even if they number 30 or more.
Another consequence is that the same class will likely show up on many different diagrams. That's ok, because as long as you are using a modeling tool (as opposed to a drawing tool), the model repository keeps all the views in sync. A modeling tool, such as Rational Rhapsody(tm), manages a semantic repository of information and dynamically links the diagrams and their elements to that underlying semantic basis. If the repository is changed - as happens when an element on a diagram is modified - then all the relevant diagrams change likewise, because the tooling ensures the elements on the diagrams are dynamically linked to the repository. Similarly, in high-fidelity modeling, the source code is simply another view of the semantic repository and is likewise dynamically linked to it. But that's another hint, for another day.
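To make the linkage concrete, here is a minimal sketch in C++ - purely illustrative, and not how Rhapsody (or any particular tool) actually implements its repository - in which two diagram views observe a single repository element, so an edit made through any view shows up in all of them:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// One semantic element in the repository; diagrams hold references
// to it rather than keeping private copies.
class ModelElement {
public:
    explicit ModelElement(std::string name) : name_(std::move(name)) {}

    void rename(const std::string& newName) {
        name_ = newName;
        notify();  // push the change to every registered view
    }
    const std::string& name() const { return name_; }

    // Views register themselves to hear about semantic changes.
    void attach(class DiagramView* view) { views_.push_back(view); }

private:
    void notify();
    std::string name_;
    std::vector<class DiagramView*> views_;
};

// A diagram is just one view of repository elements, named by its mission.
class DiagramView {
public:
    explicit DiagramView(std::string mission) : mission_(std::move(mission)) {}
    void show(const ModelElement& e) {
        std::cout << mission_ << ": element is now '" << e.name() << "'\n";
    }
private:
    std::string mission_;
};

void ModelElement::notify() {
    for (auto* v : views_) v->show(*this);
}

int main() {
    ModelElement tracker("TacticalObjectTracker");
    DiagramView useCaseDiagram("Use case realization diagram");
    DiagramView packageDiagram("Package contents diagram");
    tracker.attach(&useCaseDiagram);
    tracker.attach(&packageDiagram);

    // Editing the element on *any* diagram edits the one repository
    // entry, so both views report the new name.
    tracker.rename("TrackManager");
    return 0;
}

The point of the sketch is the shape of the relationship: the views own no element data of their own, so there is nothing to drift out of sync.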