Monday, October 18, 1999

Show some class

comp.object: Modularity by data abstraction

29.9.1999 9:00

So now you admit that the design objects can go on their merry "unreal" way, so long as they are traceable to the "real world". Excellent. We have made progress at last.

I think that the "real world" is always our own arbitrary mental map ("design") of the surrounding world, and data abstractions in OO reflect the way the human brain organizes information. Objects are just discrete entities or concepts with some interpretation attached to them. Because most of our concepts are in some way connected to the physical world and exist in time, we need state information in most of our objects. We can also think of interactive software as connecting computer objects with real-world "objects" (users, peripherals). We want to organize the digital world the same way we organize the real world.

I won't disagree with you here. I will emphasize the "start with". Your real-world key domain entities help us get *started*. Finishing is another matter. That has been my point all along.

I think there is no humanly conceivable algorithm for converting a software specification into a modular design (I would consider such an algorithm intelligent). Of course it is possible to develop rules and heuristics, but in the end it is a question of "understanding" the problem domain (having an efficient model of it, that is, a design). This applies to all programming paradigms, but with OO we can map our previous understanding more directly onto programming concepts. This is why OO gurus start with real-world entities: they are familiar, and we already have some "mental designs" for them.


comp.object: Is a Design good?
 
13.10.1999 9:00

Robert C. Martin wrote:
2. The analysis and design are not the "hard" problems.  The hard problems are *proving* that the analysis and design work, by coding it; and refitting the analysis and design when the inevitable problems happen.  The "hard" problem is to *preserve* the design as the code and environment put pressure upon it.

I think that coding isn't an efficient way of proving that the analysis and design work, because of its low abstraction level. Of course this doesn't mean that algorithms are allowed to be slow. Often an experienced programmer can instantly "see" that the implementation behind an interface doesn't affect the design and can be implemented efficiently.
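As a rough illustration of what I mean by "an implementation behind an interface" (the names here are invented just for this sketch, not taken from any real project): the design commits only to a contract, and an experienced programmer can judge that a map-based implementation of it will be efficient enough without coding the rest of the system around it.

#include <map>

// Hypothetical design-level interface: the rest of the design depends
// only on this contract, not on how it is implemented.
class AccountRegistry {
public:
    virtual ~AccountRegistry() {}
    virtual void   add(long accountId, double balance) = 0;
    virtual double balanceOf(long accountId) const = 0;
};

// One possible implementation. A reviewer can "see" that a map gives
// logarithmic lookups and fits the design, without building the whole
// system around it first.
class MapAccountRegistry : public AccountRegistry {
public:
    void add(long accountId, double balance) { accounts_[accountId] = balance; }
    double balanceOf(long accountId) const {
        std::map<long, double>::const_iterator it = accounts_.find(accountId);
        return it == accounts_.end() ? 0.0 : it->second;
    }
private:
    std::map<long, double> accounts_;
};

The choice between a map, a hash table or something else never shows up in the design-level discussion; that is exactly the kind of detail that coding forces us to settle too early.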

At the very least we should aim for software development where we need a few very good designers and lots of coding power, so that we can transfer the knowledge of the few to the whole system even in a large project.


18.10.1999 9:00

Michael C. Feathers wrote:
The interesting thing that I see about modeling is that it is not much different from coding except that you leave out the meat.  In other words, if you say that a method will do something it is taken on faith.  Once you add a complete specification of the method's behavior, you might as well have written the code. Models often aren't at a truly different level of abstraction than code.  If they were, the models would deal primarily with abstractions larger than classes.  The way modeling is done today, classes in models often correspond directly to classes in code. You identify your methods and fill in their bodies later, as if you were coloring in a coloring book.

Classes in models correspond directly to classes in code because classes are the general programming mechanism in most OO languages. That doesn't mean that classes could not use other classes abstractly. If you think of an OO program as having only one abstraction level - the code - then I think you are missing the point of OO.

The sad fact is, that the internals of methods are paramount in design.  Without them, you have no way of demonstrating correctness or evaluating performance.  You start specifying things in models without any correctness feedback. It is too easy to believe it works rather than know that it does.

It is always a matter of belief, but the question is to what extent.

IMHO you are simplifying too much. I think that the main reason we encapsulate methods and data in OO is that we want to create new concepts, that is, to construct more abstract entities than the basic programming structures (in a way, C++ is already a more "abstract" Turing machine). Of course we can apply this recursively. By abstract I mean here that we can understand the object more easily from some other context, with some default interpretations attached to it (its methods). It is also "easier" to have faith in C++'s while-loops than in some assembler code.

It is very important to hide the implementation in modelling. For example, if we are developing a banking system and want to model a user, the power of abstraction comes from the fact that we don't need to think about the colour of the user's eyes. If we have three hierarchical layers in our design (A, B and C), usually A uses B's interpretations and B uses C's, but A doesn't need to understand anything about C. So if we demand that the whole program be coded just to prove that A's relationship with B works, then we also have to code C. At the top of this abstraction hierarchy is the end-user and her context.
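Here is a minimal C++ sketch of the A/B/C idea (all the class names are hypothetical, only there to make the banking example concrete): A is written purely against B's interface and never mentions C.

// Layer B's interface, as seen by layer A.
class CustomerAccounts {
public:
    virtual ~CustomerAccounts() {}
    virtual bool withdraw(long accountId, double amount) = 0;
};

// Layer C's interface; only B's implementations know this exists.
class InterbankTransfers {
public:
    virtual ~InterbankTransfers() {}
    virtual void settle(long fromBank, long toBank, double amount) = 0;
};

// Layer A: understood and verified purely against B's contract.
class AtmSession {
public:
    explicit AtmSession(CustomerAccounts& accounts) : accounts_(accounts) {}
    bool requestCash(long accountId, double amount) {
        return accounts_.withdraw(accountId, amount);
    }
private:
    CustomerAccounts& accounts_;
};

To understand AtmSession you only need B's contract; C could change completely, or not exist yet, and A's design would not move.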

My view of OO is simply that we build a ladder of abstraction levels, or contexts, until it matches the end-user's reality. But while we expect the end-user to provide us with the specification of the program (the whole program is, after all, one huge object in itself), we don't demand that she be able to program it herself. In many large projects it is important to create many layers of abstraction that reveal the hidden structures of the real world, if we want to be able to extend or modify the system later.

Very often our end-user isn't aware of these "hidden" mid-level abstractions. That is because the layers are either more general than her own context (in a banking system, for example, the user of an ATM doesn't need to know about money transfers between banks, even if banks and users share some common properties) or so low-level that they interest only programmers (for example, a modular implementation of a data structure that the programming environment doesn't support very efficiently). When we build these new mid-level contexts, we are analysing and designing. It is important to understand that it is NOT necessary to code the lower contexts entirely before we can say they won't make the system slower by an order of magnitude.
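To make that last claim concrete, here is a small self-contained C++ sketch (again with invented names, not from any real system): the upper layer is exercised against a throwaway stub of the lower layer, so the lower context does not have to be coded entirely before we gain confidence in the design above it.

#include <cassert>

// Hypothetical lower-layer interface: the upper layer needs only this
// contract, not a finished implementation behind it.
class PersistentStore {
public:
    virtual ~PersistentStore() {}
    virtual void record(double amount) = 0;
};

// A throwaway stub stands in for the real store.
class StubStore : public PersistentStore {
public:
    void record(double) {}  // no-op: just enough to exercise the layer above
};

// Upper layer: its design can be validated against the stub.
class CashDispenser {
public:
    explicit CashDispenser(PersistentStore& store) : store_(store), cash_(200.0) {}
    bool dispense(double amount) {
        if (amount > cash_) return false;
        cash_ -= amount;
        store_.record(amount);
        return true;
    }
private:
    PersistentStore& store_;
    double cash_;
};

int main() {
    StubStore stub;
    CashDispenser dispenser(stub);
    assert(dispenser.dispense(50.0));    // the upper layer behaves as designed
    assert(!dispenser.dispense(500.0));  // without the lower layer being coded fully
    return 0;
}

Whether the real PersistentStore writes to a database or a flat file is a separate question, and answering it is not required to trust the layer above it.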

I once told someone that if there is a person on the team that you don't trust enough to design, then one of you has to go. Everyone who can understand the design can participate in the design, otherwise they are being wasted.  The design has to be simple enough for everyone to understand, and everyone has to be smart enough to evolve the design.

Again, there isn't just one level of design. I can trust some hacker to design a fast data structure or algorithm for me, but I wouldn't let her design the user interface or the overall structure of the system. It may even be a waste of resources to teach an OO method to everyone in the project.


("Weird Al" Yankovic - It's All About The Pentiums)
