[performance] rewrite timelines to rely on new timeline cache type (#3941)

* start work rewriting timeline cache type

* further work rewriting timeline caching

* more work integrating new timeline code

* remove old code

* add local timeline, fix up merge conflicts

* remove old use of go-bytes

* integrate new timeline code into more areas of the codebase; pull in latest go-mangler, go-mutexes, go-structr

* remove old timeline package, add local timeline cache

* remove references to old timeline types that needed to be started up in tests

* start adding page validation

* fix test-identified timeline cache package issues

* fix up more tests, fix missing required changes, etc

* add exclusion for test.out in gitignore

* clarify some things better in code comments

* tweak cache size limits

* fix list timeline cache fetching

* further list timeline fixes

* linter, ssssssssshhhhhhhhhhhh please

* fix linter hints

* reslice the output if it's beyond length of 'lim'

* remove old timeline initialization code, bump go-structr to v0.9.4

* continued from previous commit

* improved code comments

* don't allow multiple entries with the same BoostOfID value, to prevent showing repeated boosts of the same status

* finish writing more code comments

* some variable renaming, for ease of following

* change the way we update lo,hi paging values during timeline load

* improved code comments for updated / returned lo, hi paging values

* finish writing code comments for the StatusTimeline{} type itself

* fill in more code comments

* update go-structr version to latest with changed timeline unique indexing logic

* have a local and public timeline *per user*

* rewrite calls to public / local timelines

* remove the zero length check, as lo, hi values might still be set

* simplify timeline cache loading, fix lo/hi returns, fix timeline invalidation side-effects missing for some federated actions

* swap the lo, hi values 🤦

* add (now) missing slice reverse of tag timeline statuses when paging ASC

* remove local / public caches (out of scope for this work), share more timeline code

* remove unnecessary change

* again, remove more unused code

* remove unused function to appease the linter

* move boost checking to prepare function

* fix use of timeline.lastOrder, fix incorrect range functions used

* remove comments for repeat code

* remove the boost logic from prepare function

* do a maximum of 5 loads, not 10

* add repeat boost filtering logic, update go-structr, general improvements

* more code comments

* add important note

* fix timeline tests now that timelines are returned in page order

* remove unused field

* add StatusTimeline{} tests

* add more status timeline tests

* start adding preloading support

* ensure repeat boosts are marked in preloaded entries

* share a bunch of the database load code in timeline cache, don't clear timelines on relationship change

* add logic to allow dynamic clear / preloading of timelines

* comment out unused functions, but leave them in place as we might end up using them

* fix timeline preload state check

* much improved status timeline code comments

* more code comments, don't bother inserting statuses if timeline not preloaded

* shift around some logic to make sure things aren't accidentally left set

* finish writing code comments

* remove trim-after-insert behaviour

* fix-up some comments referring to old logic

* remove unsetting of lo, hi

* fix preload repeatBoost checking logic

* don't return on status filter errors, these are usually transient

* better concurrency safety in Clear() and Done()

* fix test broken due to addition of preloader

* fix repeatBoost logic that doesn't account for already-hidden repeatBoosts

* ensure edit submodels are dropped on cache insertion

* update code-comment to expand CAS acronym

* use a plus1hULID() instead of 24h

* remove unused functions

* add note that public / local timeline requester can be nil

* fix incorrect visibility filtering of tag timeline statuses

* ensure we filter home timeline statuses on local only

* some small re-orderings to ensure query params are in the correct places

* fix the local only home timeline filter func
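For readers following the lo / hi paging and 'lim' reslicing mentioned in the commits above, here is a minimal, hypothetical Go sketch of that paging pattern. It is illustrative only and not the GoToSocial implementation: the Status type, the descending-by-ID ordering, and the exact bound semantics are assumptions of mine borrowed from the commit messages.

package main

import "fmt"

// Status stands in for a timeline entry; real entries are full status models.
type Status struct{ ID string }

// page returns entries strictly between the hi (upper) and lo (lower) ID
// bounds from a timeline ordered by ID descending, reslices the output if
// it's beyond length of 'lim', and reports the lo / hi bounds of what was
// actually returned so the caller can request the next page.
func page(timeline []Status, lo, hi string, lim int) (out []Status, newLo, newHi string) {
	for _, s := range timeline {
		if hi != "" && s.ID >= hi {
			continue // at or above the requested page
		}
		if lo != "" && s.ID <= lo {
			break // at or below the requested page; timeline is descending
		}
		out = append(out, s)
	}
	if lim > 0 && len(out) > lim {
		out = out[:lim] // reslice the output if it's beyond length of 'lim'
	}
	if len(out) > 0 {
		newHi = out[0].ID          // highest ID returned
		newLo = out[len(out)-1].ID // lowest ID returned
	}
	return out, newLo, newHi
}

func main() {
	tl := []Status{{"05"}, {"04"}, {"03"}, {"02"}, {"01"}}
	first, lo, _ := page(tl, "", "", 2)
	fmt.Println(first, lo) // [{05} {04}] 04
	next, _, _ := page(tl, "", lo, 2)
	fmt.Println(next) // [{03} {02}]
}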
kim 2025-04-26 09:56:15 +00:00 committed by GitHub
commit 6a6a499333
108 changed files with 2935 additions and 5213 deletions


@@ -1,9 +0,0 @@
MIT License
Copyright (c) 2021 gruf
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,12 +0,0 @@
drop-in replacement for standard "bytes" library
contains alternative Buffer implementation that provides direct access to the
underlying byte-slice, with some interesting alternative struct methods. provides
no safety guards, if you pass bad values it will blow up in your face...
and alternative `ToUpper()` and `ToLower()` implementations that use lookup
tables for improved performance
provides direct call-throughs to most of the "bytes" library functions to facilitate
this being a direct drop-in. in some time, i may offer alternative implementations
for other functions too


@@ -1,131 +0,0 @@
package bytes
import (
"unicode/utf8"
)
// Buffer is a very simple buffer implementation that allows
// access to and reslicing of the underlying byte slice.
type Buffer struct {
B []byte
}
func NewBuffer(b []byte) Buffer {
return Buffer{
B: b,
}
}
func (b *Buffer) Write(p []byte) (int, error) {
b.Grow(len(p))
return copy(b.B[b.Len()-len(p):], p), nil
}
func (b *Buffer) WriteString(s string) (int, error) {
b.Grow(len(s))
return copy(b.B[b.Len()-len(s):], s), nil
}
func (b *Buffer) WriteByte(c byte) error {
l := b.Len()
b.Grow(1)
b.B[l] = c
return nil
}
func (b *Buffer) WriteRune(r rune) (int, error) {
if r < utf8.RuneSelf {
b.WriteByte(byte(r))
return 1, nil
}
l := b.Len()
b.Grow(utf8.UTFMax)
n := utf8.EncodeRune(b.B[l:b.Len()], r)
b.B = b.B[:l+n]
return n, nil
}
func (b *Buffer) WriteAt(p []byte, start int64) (int, error) {
b.Grow(len(p) - int(int64(b.Len())-start))
return copy(b.B[start:], p), nil
}
func (b *Buffer) WriteStringAt(s string, start int64) (int, error) {
b.Grow(len(s) - int(int64(b.Len())-start))
return copy(b.B[start:], s), nil
}
func (b *Buffer) Truncate(size int) {
b.B = b.B[:b.Len()-size]
}
func (b *Buffer) ShiftByte(index int) {
copy(b.B[index:], b.B[index+1:])
}
func (b *Buffer) Shift(start int64, size int) {
copy(b.B[start:], b.B[start+int64(size):])
}
func (b *Buffer) DeleteByte(index int) {
b.ShiftByte(index)
b.Truncate(1)
}
func (b *Buffer) Delete(start int64, size int) {
b.Shift(start, size)
b.Truncate(size)
}
func (b *Buffer) InsertByte(index int64, c byte) {
l := b.Len()
b.Grow(1)
copy(b.B[index+1:], b.B[index:l])
b.B[index] = c
}
func (b *Buffer) Insert(index int64, p []byte) {
l := b.Len()
b.Grow(len(p))
copy(b.B[index+int64(len(p)):], b.B[index:l])
copy(b.B[index:], p)
}
func (b *Buffer) Bytes() []byte {
return b.B
}
func (b *Buffer) String() string {
return string(b.B)
}
func (b *Buffer) StringPtr() string {
return BytesToString(b.B)
}
func (b *Buffer) Cap() int {
return cap(b.B)
}
func (b *Buffer) Len() int {
return len(b.B)
}
func (b *Buffer) Reset() {
b.B = b.B[:0]
}
func (b *Buffer) Grow(size int) {
b.Guarantee(size)
b.B = b.B[:b.Len()+size]
}
func (b *Buffer) Guarantee(size int) {
if size > b.Cap()-b.Len() {
nb := make([]byte, 2*b.Cap()+size)
copy(nb, b.B)
b.B = nb[:b.Len()]
}
}
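As a reference for the removed API, a short hypothetical usage sketch of the Buffer type deleted above. The package alias gbytes and the example values are mine; the Buffer methods and the in-place ToUpper helper are taken from the deleted package shown in this diff.

package main

import (
	"fmt"

	gbytes "codeberg.org/gruf/go-bytes" // the module being dropped in this diff
)

func main() {
	buf := gbytes.NewBuffer(make([]byte, 0, 16))
	buf.WriteString("hello")
	buf.WriteByte(' ')
	buf.WriteString("world")
	gbytes.ToUpper(buf.B)     // in-place uppercase via the package's lookup table
	fmt.Println(buf.String()) // HELLO WORLD
	buf.Truncate(6)           // trims 6 bytes from the END, unlike std bytes.Buffer
	fmt.Println(buf.String()) // HELLO
}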


@@ -1,261 +0,0 @@
package bytes
import (
"bytes"
"reflect"
"unsafe"
)
var (
_ Bytes = &Buffer{}
_ Bytes = bytesType{}
)
// Bytes defines a standard way of retrieving the content of a
// byte buffer of some-kind.
type Bytes interface {
// Bytes returns the byte slice content
Bytes() []byte
// String returns byte slice cast directly to string, this
// will cause an allocation but comes with the safety of
// being an immutable Go string
String() string
// StringPtr returns byte slice cast to string via the unsafe
// package. This comes with the same caveats of accessing via
// .Bytes() in that the content is liable to change and is NOT
// immutable, despite being a string type
StringPtr() string
}
type bytesType []byte
func (b bytesType) Bytes() []byte {
return b
}
func (b bytesType) String() string {
return string(b)
}
func (b bytesType) StringPtr() string {
return BytesToString(b)
}
// ToBytes casts the provided byte slice as the simplest possible
// Bytes interface implementation
func ToBytes(b []byte) Bytes {
return bytesType(b)
}
// Copy returns a new copy of slice b, does NOT maintain nil values
func Copy(b []byte) []byte {
p := make([]byte, len(b))
copy(p, b)
return p
}
// BytesToString returns byte slice cast to string via the "unsafe" package
func BytesToString(b []byte) string {
return *(*string)(unsafe.Pointer(&b))
}
// StringToBytes returns the string cast to a byte slice via the "unsafe" and "reflect" packages
func StringToBytes(s string) []byte {
// thank you to https://github.com/valyala/fasthttp/blob/master/bytesconv.go
var b []byte
// Get byte + string headers
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
sh := (*reflect.StringHeader)(unsafe.Pointer(&s))
// Manually set bytes to string
bh.Data = sh.Data
bh.Len = sh.Len
bh.Cap = sh.Len
return b
}
// // InsertByte inserts the supplied byte into the slice at provided position
// func InsertByte(b []byte, at int, c byte) []byte {
// return append(append(b[:at], c), b[at:]...)
// }
// // Insert inserts the supplied byte slice into the slice at provided position
// func Insert(b []byte, at int, s []byte) []byte {
// return append(append(b[:at], s...), b[at:]...)
// }
// ToUpper offers a faster ToUpper implementation using a lookup table
func ToUpper(b []byte) {
for i := 0; i < len(b); i++ {
c := &b[i]
*c = toUpperTable[*c]
}
}
// ToLower offers a faster ToLower implementation using a lookup table
func ToLower(b []byte) {
for i := 0; i < len(b); i++ {
c := &b[i]
*c = toLowerTable[*c]
}
}
// HasBytePrefix returns whether b has the provided byte prefix
func HasBytePrefix(b []byte, c byte) bool {
return (len(b) > 0) && (b[0] == c)
}
// HasByteSuffix returns whether b has the provided byte suffix
func HasByteSuffix(b []byte, c byte) bool {
return (len(b) > 0) && (b[len(b)-1] == c)
}
// TrimBytePrefix returns b without the provided leading byte
func TrimBytePrefix(b []byte, c byte) []byte {
if HasBytePrefix(b, c) {
return b[1:]
}
return b
}
// TrimByteSuffix returns b without the provided trailing byte
func TrimByteSuffix(b []byte, c byte) []byte {
if HasByteSuffix(b, c) {
return b[:len(b)-1]
}
return b
}
// Compare is a direct call-through to standard library bytes.Compare()
func Compare(b, s []byte) int {
return bytes.Compare(b, s)
}
// Contains is a direct call-through to standard library bytes.Contains()
func Contains(b, s []byte) bool {
return bytes.Contains(b, s)
}
// TrimPrefix is a direct call-through to standard library bytes.TrimPrefix()
func TrimPrefix(b, s []byte) []byte {
return bytes.TrimPrefix(b, s)
}
// TrimSuffix is a direct call-through to standard library bytes.TrimSuffix()
func TrimSuffix(b, s []byte) []byte {
return bytes.TrimSuffix(b, s)
}
// Equal is a direct call-through to standard library bytes.Equal()
func Equal(b, s []byte) bool {
return bytes.Equal(b, s)
}
// EqualFold is a direct call-through to standard library bytes.EqualFold()
func EqualFold(b, s []byte) bool {
return bytes.EqualFold(b, s)
}
// Fields is a direct call-through to standard library bytes.Fields()
func Fields(b []byte) [][]byte {
return bytes.Fields(b)
}
// FieldsFunc is a direct call-through to standard library bytes.FieldsFunc()
func FieldsFunc(b []byte, fn func(rune) bool) [][]byte {
return bytes.FieldsFunc(b, fn)
}
// HasPrefix is a direct call-through to standard library bytes.HasPrefix()
func HasPrefix(b, s []byte) bool {
return bytes.HasPrefix(b, s)
}
// HasSuffix is a direct call-through to standard library bytes.HasSuffix()
func HasSuffix(b, s []byte) bool {
return bytes.HasSuffix(b, s)
}
// Index is a direct call-through to standard library bytes.Index()
func Index(b, s []byte) int {
return bytes.Index(b, s)
}
// IndexByte is a direct call-through to standard library bytes.IndexByte()
func IndexByte(b []byte, c byte) int {
return bytes.IndexByte(b, c)
}
// IndexAny is a direct call-through to standard library bytes.IndexAny()
func IndexAny(b []byte, s string) int {
return bytes.IndexAny(b, s)
}
// IndexRune is a direct call-through to standard library bytes.IndexRune()
func IndexRune(b []byte, r rune) int {
return bytes.IndexRune(b, r)
}
// IndexFunc is a direct call-through to standard library bytes.IndexFunc()
func IndexFunc(b []byte, fn func(rune) bool) int {
return bytes.IndexFunc(b, fn)
}
// LastIndex is a direct call-through to standard library bytes.LastIndex()
func LastIndex(b, s []byte) int {
return bytes.LastIndex(b, s)
}
// LastIndexByte is a direct call-through to standard library bytes.LastIndexByte()
func LastIndexByte(b []byte, c byte) int {
return bytes.LastIndexByte(b, c)
}
// LastIndexAny is a direct call-through to standard library bytes.LastIndexAny()
func LastIndexAny(b []byte, s string) int {
return bytes.LastIndexAny(b, s)
}
// LastIndexFunc is a direct call-through to standard library bytes.LastIndexFunc()
func LastIndexFunc(b []byte, fn func(rune) bool) int {
return bytes.LastIndexFunc(b, fn)
}
// Replace is a direct call-through to standard library bytes.Replace()
func Replace(b, s, r []byte, c int) []byte {
return bytes.Replace(b, s, r, c)
}
// ReplaceAll is a direct call-through to standard library bytes.ReplaceAll()
func ReplaceAll(b, s, r []byte) []byte {
return bytes.ReplaceAll(b, s, r)
}
// Split is a direct call-through to standard library bytes.Split()
func Split(b, s []byte) [][]byte {
return bytes.Split(b, s)
}
// SplitAfter is a direct call-through to standard library bytes.SplitAfter()
func SplitAfter(b, s []byte) [][]byte {
return bytes.SplitAfter(b, s)
}
// SplitN is a direct call-through to standard library bytes.SplitN()
func SplitN(b, s []byte, c int) [][]byte {
return bytes.SplitN(b, s, c)
}
// SplitAfterN is a direct call-through to standard library bytes.SplitAfterN()
func SplitAfterN(b, s []byte, c int) [][]byte {
return bytes.SplitAfterN(b, s, c)
}
// NewReader is a direct call-through to standard library bytes.NewReader()
func NewReader(b []byte) *bytes.Reader {
return bytes.NewReader(b)
}
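The BytesToString / StringToBytes helpers in the deleted file above rely on reflect.SliceHeader and reflect.StringHeader, which are deprecated in current Go releases. For comparison only (this is a general note, not code from this repository), the same zero-copy conversions can be written with the unsafe helpers added in Go 1.20; the package name conv is a placeholder.

package conv

import "unsafe"

// bytesToString returns a string aliasing b's memory; b must not be
// modified while the string is in use.
func bytesToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// stringToBytes returns a byte slice aliasing s's memory; the returned
// slice must never be written to.
func stringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}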


@@ -1,11 +0,0 @@
package bytes
// Code generated by go run bytesconv_table_gen.go; DO NOT EDIT.
// See bytesconv_table_gen.go for more information about these tables.
//
// Source: https://github.com/valyala/fasthttp/blob/master/bytes_table_gen.go
const (
toLowerTable = "\x00\x01\x02\x03\x04\x05\x06\a\b\t\n\v\f\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !\"#$%&'()*+,-./0123456789:;<=>?@abcdefghijklmnopqrstuvwxyz[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\u007f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
toUpperTable = "\x00\x01\x02\x03\x04\x05\x06\a\b\t\n\v\f\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`ABCDEFGHIJKLMNOPQRSTUVWXYZ{|}~\u007f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
)


@@ -1,6 +1,7 @@
package structr
import (
"fmt"
"os"
"reflect"
"strings"
@@ -222,10 +223,10 @@ func (i *Index) get(key string, hook func(*indexed_item)) {
func (i *Index) key(buf *byteutil.Buffer, parts []unsafe.Pointer) string {
buf.B = buf.B[:0]
if len(parts) != len(i.fields) {
panicf("incorrect number key parts: want=%d received=%d",
panic(fmt.Sprintf("incorrect number key parts: want=%d received=%d",
len(i.fields),
len(parts),
)
))
}
if !allow_zero(i.flags) {
for x, field := range i.fields {


@@ -70,7 +70,7 @@ func find_field(t reflect.Type, names []string) (sfield struct_field) {
name := names[0]
names = names[1:]
if !is_exported(name) {
panicf("field is not exported: %s", name)
panic(fmt.Sprintf("field is not exported: %s", name))
}
return name
}
@@ -94,7 +94,7 @@ func find_field(t reflect.Type, names []string) (sfield struct_field) {
// Check for valid struct type.
if t.Kind() != reflect.Struct {
panicf("field %s is not struct (or ptr-to): %s", t, name)
panic(fmt.Sprintf("field %s is not struct (or ptr-to): %s", t, name))
}
var ok bool
@@ -102,7 +102,7 @@ func find_field(t reflect.Type, names []string) (sfield struct_field) {
// Look for next field by name.
field, ok = t.FieldByName(name)
if !ok {
panicf("unknown field: %s", name)
panic(fmt.Sprintf("unknown field: %s", name))
}
// Set next offset value.
@@ -258,11 +258,6 @@ func eface_data(a any) unsafe.Pointer {
return (*eface)(unsafe.Pointer(&a)).data
}
// panicf provides a panic with string formatting.
func panicf(format string, args ...any) {
panic(fmt.Sprintf(format, args...))
}
// assert can be called to indicate a block
// of code should not be able to be reached,
// it returns a BUG report with callsite.


@@ -190,7 +190,8 @@ func (t *Timeline[T, PK]) Select(min, max *PK, length *int, dir Direction) (valu
// Insert will insert the given values into the timeline,
// calling any set invalidate hook on each inserted value.
func (t *Timeline[T, PK]) Insert(values ...T) {
// Returns current list length after performing inserts.
func (t *Timeline[T, PK]) Insert(values ...T) int {
// Acquire lock.
t.mutex.Lock()
@@ -269,6 +270,10 @@ func (t *Timeline[T, PK]) Insert(values ...T) {
// Get func ptrs.
invalid := t.invalid
// Get length AFTER
// insert to return.
len := t.list.len
// Done with lock.
t.mutex.Unlock()
@@ -279,6 +284,8 @@ func (t *Timeline[T, PK]) Insert(values ...T) {
invalid(value)
}
}
return len
}
// Invalidate invalidates all entries stored in index under given keys.
@@ -336,8 +343,8 @@ func (t *Timeline[T, PK]) Invalidate(index *Index, keys ...Key) {
//
// Please note that the entire Timeline{} will be locked for the duration of the range
// operation, i.e. from the beginning of the first yield call until the end of the last.
func (t *Timeline[T, PK]) Range(dir Direction) func(yield func(T) bool) {
return func(yield func(T) bool) {
func (t *Timeline[T, PK]) Range(dir Direction) func(yield func(index int, value T) bool) {
return func(yield func(int, T) bool) {
if t.copy == nil {
panic("not initialized")
} else if yield == nil {
@@ -348,7 +355,9 @@ func (t *Timeline[T, PK]) Range(dir Direction) func(yield func(T) bool) {
t.mutex.Lock()
defer t.mutex.Unlock()
var i int
switch dir {
case Asc:
// Iterate through linked list from bottom (i.e. tail).
for prev := t.list.tail; prev != nil; prev = prev.prev {
@@ -360,9 +369,12 @@ func (t *Timeline[T, PK]) Range(dir Direction) func(yield func(T) bool) {
value := t.copy(item.data.(T))
// Pass to given function.
if !yield(value) {
if !yield(i, value) {
break
}
// Iter
i++
}
case Desc:
@@ -376,9 +388,12 @@ func (t *Timeline[T, PK]) Range(dir Direction) func(yield func(T) bool) {
value := t.copy(item.data.(T))
// Pass to given function.
if !yield(value) {
if !yield(i, value) {
break
}
// Iter
i++
}
}
}
@@ -390,8 +405,8 @@ func (t *Timeline[T, PK]) Range(dir Direction) func(yield func(T) bool) {
//
// Please note that the entire Timeline{} will be locked for the duration of the range
// operation, i.e. from the beginning of the first yield call until the end of the last.
func (t *Timeline[T, PK]) RangeUnsafe(dir Direction) func(yield func(T) bool) {
return func(yield func(T) bool) {
func (t *Timeline[T, PK]) RangeUnsafe(dir Direction) func(yield func(index int, value T) bool) {
return func(yield func(int, T) bool) {
if t.copy == nil {
panic("not initialized")
} else if yield == nil {
@@ -402,7 +417,9 @@ func (t *Timeline[T, PK]) RangeUnsafe(dir Direction) func(yield func(T) bool) {
t.mutex.Lock()
defer t.mutex.Unlock()
var i int
switch dir {
case Asc:
// Iterate through linked list from bottom (i.e. tail).
for prev := t.list.tail; prev != nil; prev = prev.prev {
@@ -411,9 +428,12 @@ func (t *Timeline[T, PK]) RangeUnsafe(dir Direction) func(yield func(T) bool) {
item := (*timeline_item)(prev.data)
// Pass to given function.
if !yield(item.data.(T)) {
if !yield(i, item.data.(T)) {
break
}
// Iter
i++
}
case Desc:
@@ -424,9 +444,12 @@ func (t *Timeline[T, PK]) RangeUnsafe(dir Direction) func(yield func(T) bool) {
item := (*timeline_item)(next.data)
// Pass to given function.
if !yield(item.data.(T)) {
if !yield(i, item.data.(T)) {
break
}
// Iter
i++
}
}
}
@@ -1033,6 +1056,9 @@ indexing:
// checking for collisions.
if !idx.add(key, i_item) {
// This key already appears
// in this unique index. So
// drop new timeline item.
t.delete(t_item)
free_buffer(buf)
return last
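The timeline hunks above change Insert to return the post-insert list length, and change Range / RangeUnsafe to yield an index alongside each value. Below is a hypothetical caller-side sketch of the new signatures: the Status element type, the string key type, the dumpTimeline helper, and the assumption that Asc / Desc are exported Direction constants are all mine, and Timeline construction is not shown in this diff.

package example

import (
	"fmt"

	"codeberg.org/gruf/go-structr"
)

// Status is a stand-in element type; the real code stores full status models.
type Status struct{ ID string }

// dumpTimeline takes an already-initialized timeline, inserts some values,
// and walks it newest-first using the changed iterator signature.
func dumpTimeline(tl *structr.Timeline[*Status, string], statuses ...*Status) {
	n := tl.Insert(statuses...) // Insert now returns the list length after inserting
	fmt.Printf("timeline now holds %d entries\n", n)

	// Range now yields (index, value); the returned closure is invoked directly here.
	tl.Range(structr.Desc)(func(i int, s *Status) bool {
		fmt.Println(i, s.ID)
		return true // keep iterating
	})
}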

vendor/modules.txt (vendored, 5 lines changed)

@@ -215,9 +215,6 @@ code.superseriousbusiness.org/oauth2/v4/generates
code.superseriousbusiness.org/oauth2/v4/manage
code.superseriousbusiness.org/oauth2/v4/models
code.superseriousbusiness.org/oauth2/v4/server
# codeberg.org/gruf/go-bytes v1.0.2
## explicit; go 1.14
codeberg.org/gruf/go-bytes
# codeberg.org/gruf/go-bytesize v1.0.3
## explicit; go 1.17
codeberg.org/gruf/go-bytesize
@@ -280,7 +277,7 @@ codeberg.org/gruf/go-storage/disk
codeberg.org/gruf/go-storage/internal
codeberg.org/gruf/go-storage/memory
codeberg.org/gruf/go-storage/s3
# codeberg.org/gruf/go-structr v0.9.6
# codeberg.org/gruf/go-structr v0.9.7
## explicit; go 1.22
codeberg.org/gruf/go-structr
# github.com/DmitriyVTitov/size v1.5.0