Re: [HACKERS] Current sources?
From        | dg@illustra.com (David Gould)
Subject     | Re: [HACKERS] Current sources?
Date        |
Msg-id      | 9805260533.AA16075@hawk.illustra.com
In reply to | Re: [HACKERS] Current sources? (t-ishii@sra.co.jp)
Responses   | Re: [HACKERS] Current sources?
List        | pgsql-hackers
> >> I do not believe that this could ever have passed regression.  Do we have
> >> the whole patch to back out, or do we need to just "fix what we have now"?
> >>
> >> Also, perhaps we need to be more selective about checkins?
> >
> > Not sure.  Marc and I reviewed it, and it looked very good.  In fact, I
> > would like to see more of such patches, of course, without the destroydb
> > problem, but many patches have little quirks the author could not have
> > anticipated.
> >
> >> {
> >>     JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
> >>     estate->es_junkFilter = j;
> >> >>>> tupType = j->jf_cleanTupType; /* Added by daveh@insightdist.com 5/20/98 */
> >> }
> >> else
> >>     estate->es_junkFilter = NULL;
> >>
> >> Here is my debug transcript for "drop database regression"
> >
> > Here is the original patch.  I got it with the command:
>
> I have just removed the patch using patch -R and confirmed that "drop
> table" and "delete from" work again.  The regression tests also look
> good, except char/varchar/strings.
>
> Now I can start to create patches for the snapshot...

Maybe I should elaborate a bit on what I meant by "more selective about
checkins".

First, I agree that we should see more patches like this.  Patches in the
parser, optimiser, planner, etc. are hard.  It takes a fair amount of effort
to understand this code even to the point of being able to attempt a patch
in this area.  So I applaud anyone who makes that kind of investment, and I
hope they continue their efforts.

Second, patches in the parser, optimiser, planner, etc. are hard.  It is
incredibly easy to do something that works in one case but creates a problem
for some other statement.  These problems can be very obscure and hard to
debug; this one was relatively easy.

So, how do we balance the goals of "encouraging people to make the effort to
do a hard patch" and "keeping the codeline stable enough to work from"?
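[Editor's note: the "removed the patch using patch -R" step quoted above can
be sketched self-contained.  The file names below are invented for the
example; a real revert would run `patch -R` against the actual source tree
with the submitted patch file, then rebuild and rerun the regression tests.]

```shell
# Self-contained sketch of backing out a change with `patch -R`.
set -e
workdir=$(mktemp -d)
cd "$workdir"

printf 'line one\nline two\n'         > orig.c     # pre-patch file
printf 'line one\nline two changed\n' > current.c  # file with the patch applied

# Regenerate the patch as it would have been submitted
# (diff exits 1 when the files differ, so suppress that under `set -e`):
diff -u orig.c current.c > change.patch || true

# Reverse-apply the patch to restore the pre-patch contents:
patch -R current.c < change.patch

cmp orig.c current.c && echo "revert ok"
```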
Where I work we have had good success with the following:

- Every night a from-scratch build and regression test is run.

- If the build and regression are good, a snapshot is made into a "last
  known good" location.  This lets someone find a good "recent" source tree
  even if there is a problem that doesn't get solved for a few days.

- A report is generated that lists the results of the build and regression,
  AND a list of the checkins since the "last known good" snapshot.  This
  lets someone who just submitted a patch see a) that the patch was applied,
  and b) whether it created any problems.  It also helps identify
  conflicting patches, etc.

I believe most people submitting patches want them to work, and will be very
good about monitoring the results of their submissions and fixing any
problems, IF the results are visible.  Right now they aren't, really.

The other tool I believe to be very effective in improving code quality is
code review.  My experience is that review is both more effective and
cheaper than testing at finding problems.

To that end, I suggest we create a group of volunteer reviewers, possibly
with their own mailing list.  The idea is not to impose a bureaucratic
barrier to submitting patches, but rather to allow people who have an idea
to get some assistance on whether a given change will fit in and work well.
I see some people on this list using it for this purpose now; I merely
propose to formalise this, so that everyone knows this resource is available
to them, and so that, given an actual patch (rather than mere discussion),
we can identify specific persons to do a review.

I don't have strong preferences for the form of this, so ideas are welcome.
My initial concept supposes a group of maybe 5 to 10 people with some
experience in the code who would agree to review any patch within, say, two
days of submission and respond directly to the submitter.
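[Editor's note: the nightly build / "last known good" scheme described above
can be sketched as a small script.  The checkout and build/test steps are
stubbed out here so the control flow is runnable; all paths are invented,
and a real cron job would invoke the source checkout, configure/make, and
the regression suite in their place.]

```shell
#!/bin/sh
# Sketch of a nightly build with a "last known good" snapshot.
set -e
ROOT=$(mktemp -d)
NIGHTLY="$ROOT/nightly"          # tonight's from-scratch tree
LKG="$ROOT/last-known-good"      # snapshot people can always build from
REPORT="$ROOT/report.txt"

checkout() { mkdir -p "$NIGHTLY" && echo "sources $(date)" > "$NIGHTLY/tree"; }
build_and_test() { true; }       # stand-in for configure, make, regression run

checkout
if build_and_test > "$REPORT" 2>&1; then
    # Passed: refresh the last-known-good snapshot.
    rm -rf "$LKG"
    cp -R "$NIGHTLY" "$LKG"
    echo "nightly: GOOD" >> "$REPORT"
else
    # Failed: leave the previous last-known-good tree untouched.
    echo "nightly: BROKEN" >> "$REPORT"
fi
# A real report would also append the list of checkins since the last
# good snapshot, so submitters can see their patch land and its effect.
cat "$REPORT"
```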
Perhaps the mailing list could somehow be contrived to pick reviewers
randomly (or allow reviewers to pick) so that, say, two reviewers had a look
at each patch.

Also, I think it is important to identify the reviewers in the changelog or
checkin comments, so that if there is a problem and the author is
unavailable, there are at least the reviewers with knowledge of what the
patch was about.

I would be happy to volunteer for a term on the ("possible proposed
mythical") review team.  I would be even happier to know that the next time
I had a tricky patch, some other heads would take the time to help me see
what I had overlooked.

Comments?

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software  (No, really)         300 Lakeside Drive  Oakland, CA 94612
"you can no more argue against quality than you can argue against world
 peace or free video games." -- p.j. plauger