prerequisites
Using java-allocation-instrumenter, let's count how many objects a logger produces for a simple log message like log.info("value");
for log4j, also for messages like log.info("value: " + v);
and log.info("value:" + i + " " + i + " " + i + " " + i + " " + i
    + " " + i + " " + i + " " + i + " " + i + " " + i);
and for logback: log.info("value: {}", v);
and log.info("value:{} {} {} {} {} {} {} {} {} {} ",
    new long[]{i, i, i, i, i, i, i, i, i, i});
where v is a long value.
[Charts: number of objects allocated by the loggers per single message, and memory consumption of those objects (in bytes) per single message.]
Details per 100 messages:

                  count    size (bytes)
log4j-string       1014        42496
log4j-long         1537        58824
log4j-longs        1511        76448
logback-string     2843       150336
logback-long       3982       196296
logback-longs      3987       268464
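For reference, here is a minimal sketch of such a measurement harness, assuming java-allocation-instrumenter is attached as a -javaagent; the class and counter names are illustrative and not taken from the original benchmark code:

// import com.google.monitoring.runtime.instrumentation.AllocationRecorder;
// import com.google.monitoring.runtime.instrumentation.Sampler;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class AllocationCounter {
    private static final AtomicInteger COUNT = new AtomicInteger();
    private static final AtomicLong BYTES = new AtomicLong();

    public static void main(String[] args) {
        // the Sampler callback is invoked for every allocation made by instrumented code
        AllocationRecorder.addSampler(new Sampler() {
            public void sampleAllocation(int count, String desc, Object newObj, long size) {
                COUNT.incrementAndGet();
                BYTES.addAndGet(size);
            }
        });

        // log 100 messages, then report the totals, as in the details above
        for (int i = 0; i < 100; i++) {
            // log.info("value: " + i);   // the call under test goes here
        }
        System.out.println("count:" + COUNT.get() + " size:" + BYTES.get());
    }
}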
other sources of garbage
It is worth knowing which thread a log message was produced on; the obvious answer is Thread.currentThread().getName(). But for historical reasons and some mystical design decisions this method produces two new objects on each call:

it's an array of char[4]
just allocated the string 'main'
Thread.currentThread().getName() allocated:2 size:48
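One way to avoid paying that price on every log call, shown here as a hedged sketch (this is not gflogger's actual code), is to cache the name once per thread:

// cache the thread name so the hot logging path does not pay
// two allocations per Thread.currentThread().getName() call
private static final ThreadLocal<String> THREAD_NAME = new ThreadLocal<String>() {
    @Override
    protected String initialValue() {
        // allocates once per thread instead of twice per log call
        return Thread.currentThread().getName();
    }
};

static String cachedThreadName() {
    return THREAD_NAME.get(); // no allocation after the first call on a given thread
}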
It is also quite clear that methods like Integer.toString, Long.toString and so on produce garbage:

it's an array of char[5] [main]
just allocated the string '12345'
Integer.toString allocated:2 size:48
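This is exactly the kind of work a garbage-free logger has to do by hand. A minimal sketch (not gflogger's actual implementation) of rendering an int into a preallocated char[] without creating a String or a temporary array:

// write the decimal digits of value into buffer starting at pos and return
// the new position; Integer.MIN_VALUE is not handled in this sketch
static int appendInt(char[] buffer, int pos, int value) {
    if (value == 0) {
        buffer[pos++] = '0';
        return pos;
    }
    if (value < 0) {
        buffer[pos++] = '-';
        value = -value;
    }
    int start = pos;
    while (value > 0) {                  // digits come out in reverse order
        buffer[pos++] = (char) ('0' + value % 10);
        value /= 10;
    }
    for (int i = start, j = pos - 1; i < j; i++, j--) {
        char tmp = buffer[i];            // reverse the digits in place
        buffer[i] = buffer[j];
        buffer[j] = tmp;
    }
    return pos;
}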
Another, more interesting, fact is that Condition.await (from ReentrantLock) produces garbage as well:

just allocated the object java.util.concurrent.locks.AbstractQueuedSynchronizer$Node@19050a0 of type java/util/concurrent/locks/AbstractQueuedSynchronizer$Node whose size is 32
ReentrantLock allocated:1 size:32
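This matters for a logger whose i/o thread has to wait for new entries. One common garbage-free alternative, shown here as a hedged sketch (it is not necessarily gflogger's actual wait strategy), is to spin for a while and then back off with LockSupport.parkNanos, which does not create an AbstractQueuedSynchronizer$Node per wait:

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

final class SpinThenParkWaitStrategy {
    // wait until the producer cursor reaches the expected sequence
    static void waitFor(long expected, AtomicLong cursor) {
        int spins = 0;
        while (cursor.get() < expected) {
            if (spins++ < 1000) {
                Thread.yield();                 // busy-ish spin first
            } else {
                LockSupport.parkNanos(1000L);   // back off without allocating
            }
        }
    }
}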
garbage free logger
The goal is to create a logger for low-latency (latency-critical) applications (to be precise, for the latency-critical execution path) that affects the application, both explicitly and implicitly (through GC pauses), as little as possible. It means that it will not be a general-purpose logger. It also means that this logger is not meant to be a log4j or logback (or whatever you like) killer: it is fine to combine them, using e.g. log4j for the initial application phase and this logger for the latency-critical execution path. Combining the previous observations leads to the following ideas:
- do not discard any objects, to eliminate garbage collector load completely
- use reusable entries, so it is worth preallocating the necessary items
- higher throughput in comparison with competitors like log4j/logback
- support the functionality of Integer.toString, Long.toString, DateFormat.format etc. to render numbers and dates directly into the stream
- a single i/o thread
- use an off-heap buffer to provide fast i/o and less GC load (see the sketch below)
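A minimal sketch of the reusable-entry and off-heap ideas combined; the class and method names are illustrative and are not gflogger's real API:

import java.nio.ByteBuffer;

final class ReusableEntry {
    private final ByteBuffer buffer;

    ReusableEntry(int maxMessageSize) {
        // direct (off-heap) buffer: it is not scanned by the GC and can be
        // written to a FileChannel without an extra copy
        this.buffer = ByteBuffer.allocateDirect(maxMessageSize);
    }

    ReusableEntry append(CharSequence csq) {
        for (int i = 0; i < csq.length(); i++) {
            buffer.put((byte) csq.charAt(i));   // ASCII-only in this sketch
        }
        return this;
    }

    ByteBuffer readable() {
        buffer.flip();                          // ready to be drained by the single i/o thread
        return buffer;
    }

    void reset() {
        buffer.clear();                         // the same entry serves the next message
    }
}

// preallocate the pool once, e.g. 1024 entries of 4k each:
// ReusableEntry[] pool = new ReusableEntry[1 << 10];
// for (int i = 0; i < pool.length; i++) pool[i] = new ReusableEntry(1 << 12);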
gflogger design
limitations
gflogger DOES NOT have:
- garbage overhead at runtime (except, to be honest, during the initialization and file rolling phases)
- unlimited message length
- a basic xml/properties configurator (configure it yourself via Spring etc.)
- the ability to use appenders in a hierarchy of different categories
- the ability to track changes of:
  - thread name
  - time zone and locale
- different types of appenders such as a JMS appender, an email appender and so on
api
public interface LogEntry {
    LogEntry append(char c);
    LogEntry append(CharSequence csq);
    LogEntry append(CharSequence csq, int start, int end);
    LogEntry append(boolean b);
    LogEntry append(byte i);
    LogEntry append(short i);
    LogEntry append(int i);
    LogEntry append(long i);
    LogEntry append(double i, int precision);
    // finishes the entry and hands it over to the i/o thread
    void commit();
}
example
// import java.util.Collections;
// import gflogger.*;
// import gflogger.appender.*;
// import gflogger.base.*;

final FileAppenderFactory fileAppender = new FileAppenderFactory();
fileAppender.setLogLevel(LogLevel.INFO);
fileAppender.setFileName("logs/file.log");
fileAppender.setAppend(false);
fileAppender.setImmediateFlush(false);
fileAppender.setLayoutPattern("%d{HH:mm:ss,SSS zzz} %p %m [%c{2}] [%t]%n");

// 1024 entries of 4k max message size each
final LoggerService service =
    new DefaultLoggerServiceImpl(1 << 10, 1 << 12, fileAppender);
LogFactory.init(Collections.singletonMap("com.me", service));

private static final gflogger.Logger logger =
    gflogger.LogFactory.getLog(MyClass.class);

// in a method
logger.info().append("value:").append(value).commit();
gflogger variants
- gflogger is a logger on top of a simplified Disruptor snapshot, which means there is no external dependency on disruptor.jar
  log.info().append("v:").append(v).commit()
- dgflogger is a logger on top of the pure Disruptor and depends on disruptor.jar
  log.info().append("v:").append(v).commit()
- *logger+d is *logger with off-heap buffers enabled
- *logger-d is *logger with off-heap buffers disabled
- gflog4j is a gflogger API wrapper over the log4j implementation:
  log.info().append("v:").append(v).commit();
  is similar to log4j.info(msg);
- jcl-gflogger is a commons-logging API wrapper over the gflogger implementation:
  log.info("value:" + v);
  is similar to gflogger.info().append(msg).commit();
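As a hedged illustration of how such a wrapper can work (this is not the real gflog4j code; the class name is made up), a gflogger-style entry can simply buffer the appends and hand the final string to log4j on commit():

import org.apache.log4j.Logger;

final class Log4jBackedLogEntry {
    private final Logger log4j;
    private final StringBuilder buffer = new StringBuilder();

    Log4jBackedLogEntry(Logger log4j) {
        this.log4j = log4j;
    }

    Log4jBackedLogEntry append(CharSequence csq) {
        buffer.append(csq);
        return this;
    }

    Log4jBackedLogEntry append(long v) {
        buffer.append(v);
        return this;
    }

    void commit() {
        log4j.info(buffer.toString());   // the wrapper still allocates, of course
        buffer.setLength(0);             // reuse the builder for the next entry
    }
}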
benchmark
- all loggers are configured in a similar way: async mode (bypass for logback), buffered i/o (a log4j example of such a configuration is sketched after this list)
- messages are of the "string + long" kind, expressed in each logger's own approach
- the log message pattern is
  %d{HH:mm:ss,SSS zzz} %p %m [%c{2}] [%t]%n
- appenders: a single file appender only
- the warm-up phase is 5 * 10000 runs; after that System.gc() is invoked explicitly and the benchmark waits for 2 seconds
- buffer size: 1024 messages
- allocation instrumentation is not used for the following benchmark tests
- benchmark platform: 16-core Intel box, Solaris, Java 6u23
- gflogger build.20120115
- log4j 1.2.15
- logback 0.9.29
- disruptor 2.7
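For example, here is a hedged sketch of how the log4j side of such a setup could be wired programmatically; the file name and the i/o buffer size are illustrative, not the benchmark's actual values:

import java.io.IOException;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class Log4jAsyncSetup {
    public static void main(String[] args) throws IOException {
        // buffered file appender with the same layout pattern as above
        FileAppender file = new FileAppender(
            new PatternLayout("%d{HH:mm:ss,SSS zzz} %p %m [%c{2}] [%t]%n"),
            "logs/log4j.log",   // illustrative path
            false,              // append
            true,               // bufferedIO
            8192);              // i/o buffer size in bytes

        // async mode: events are queued and written by a background thread
        AsyncAppender async = new AsyncAppender();
        async.setBufferSize(1024);  // 1024 messages, as in the settings above
        async.addAppender(file);

        Logger.getRootLogger().addAppender(async);
        Logger.getLogger("com.me").info("value: " + 12345L);
    }
}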
The source code is published under the Apache 2.0 License: gflogger @ bitbucket
P.S. This is my talk topic at JavaOne 2012 Moscow.