Programming thread -

Shoggoth

kiwifarms.net
What should I do to improve my coding skills? I took basic Java classes, but what should I do to improve my knowledge?
Pick up a project. Do a bang-up job. Read Clean Code, Clean Architecture, and A Little Java. Learn about Design Patterns, what they're good for, and when they're a damn cargo cult. Repeat.
 


Least Concern

Pretend I have a waifu avatar like everyone else
kiwifarms.net
Pick up a project.
Most importantly, this. Reading about programming is like reading about weightlifting; it'll teach you good technique, but if you want to actually see results, you need to put what you've read into practice.

When you're just starting out, that will probably mean writing boring projects that have been done hundreds of times before and which nobody else will ever use. Don't let it discourage you. Like sudoku, you have to find the joy in the solving of the puzzle itself, regardless of the useless outcome.
 

Shoggoth

kiwifarms.net
Most importantly, this. Reading about programming is like reading about weightlifting; it'll teach you good technique, but if you want to actually see results, you need to put what you've read into practice.

When you're just starting out, that will probably mean writing boring projects that have been done hundreds of times before and which nobody else will ever use. Don't let it discourage you. Like sudoku, you have to find the joy in the solving of the puzzle itself, regardless of the useless outcome.
To add to that, the build-your-own-x GitHub repo is a good place to start.
 
  • Winner
Reactions: Yotsubaaa

FuckedUp

Trump's half-Chosen
kiwifarms.net
I have to make an LFSR for a programming assignment. It should be really easy, but both the PDF and the test file say that "0001101100001100" should step to "00011011000011000". The taps are supposed to be at 13, 12, and 10, and even doing it by hand shows the result should be "00011011000011001". WTF is going on? I already spent a fucking hour on this.
 
  • Thunk-Provoking
Reactions: Yotsubaaa

Yotsubaaa

True & Honest Fan
kiwifarms.net
I have to make an LFSR for a programming assignment. It should be really easy, but both the PDF and the test file say that "0001101100001100" should step to "00011011000011000". The taps are supposed to be at 13, 12, and 10, and even doing it by hand shows the result should be "00011011000011001". WTF is going on? I already spent a fucking hour on this.
Yeah, I think you're right. It's been a while since I've looked at LFSRs, so unless I'm messing this up in a really embarrassing way, it should be e.g.
Code:
 0  0  0  1  1  0  1  1  0  0  0  0  1  1  0  0
15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
Then XORing the shifted bit with the taps,
0 ⊕ 0 ⊕ 1 ⊕ 0 = 1

So the next one is
Code:
 0  0  1  1  0  1  1  0  0  0  0  1  1  0  0  1
15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
as you said.

(Unless I've got the labeling the wrong way? Then it would be a zero:
Code:
 0  0  0  1  1  0  1  1  0  0  0  0  1  1  0  0
 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
0 ⊕ 1 ⊕ 1 ⊕ 0 = 0.)

Maybe the prof/TA just fucked it up? That happens all the time. (It doesn't necessarily mean anything that the error appears in both the spec PDF and the test cases. The test case inputs/outputs would have been copy-pasted straight into the PDF when they compiled the assignment spec; ironically, to avoid errors.) Dunno, perhaps shoot off an email to a TA?
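In code, the stepping rule I'm describing (shift left one place, feed back the dropped bit XORed with taps 13, 12, and 10, bit 0 rightmost) would look something like this. A rough sketch for sanity-checking by machine rather than by hand, not the assignment's actual harness:

```java
public class Lfsr {
    // One step of a 16-bit Fibonacci LFSR: the bit shifted out of
    // position 15 is XORed with the bits at taps 13, 12, and 10,
    // and the result becomes the new bit 0.
    static int step(int state) {
        int out = (state >> 15) & 1;               // bit shifted out
        int feedback = out
                ^ ((state >> 13) & 1)
                ^ ((state >> 12) & 1)
                ^ ((state >> 10) & 1);
        return ((state << 1) & 0xFFFF) | feedback; // keep 16 bits
    }

    public static void main(String[] args) {
        int s = Integer.parseInt("0001101100001100", 2);
        // Pad back to 16 characters for printing
        String next = String.format("%16s",
                Integer.toBinaryString(step(s))).replace(' ', '0');
        System.out.println(next); // 0011011000011001
    }
}
```

This agrees with the hand calculation above, not with the spec's expected value.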
 

FuckedUp

Trump's half-Chosen
kiwifarms.net
Yeah, I think you're right. It's been a while since I've looked at LFSRs, so unless I'm messing this up in a really embarrassing way, it should be e.g.
Code:
 0  0  0  1  1  0  1  1  0  0  0  0  1  1  0  0
15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
Then XORing the shifted bit with the taps,
0 ⊕ 0 ⊕ 1 ⊕ 0 = 1

So the next one is
Code:
 0  0  1  1  0  1  1  0  0  0  0  1  1  0  0  1
15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
as you said.

(Unless I've got the labeling the wrong way? Then it would be a zero:
Code:
 0  0  0  1  1  0  1  1  0  0  0  0  1  1  0  0
 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
0 ⊕ 1 ⊕ 1 ⊕ 0 = 0.)

Maybe the prof/TA just fucked it up? That happens all the time. (It doesn't necessarily mean anything that the error appears in both the spec PDF and the test cases. The test case inputs/outputs would have been copy-pasted straight into the PDF when they compiled the assignment spec; ironically, to avoid errors.) Dunno, perhaps shoot off an email to a TA?
Yeah, it was the professor. How hard is it to compile and run the file yourself before posting it online? jfc :roll:
For reference, this was using Boost, so the output wasn't just a different pattern, but it actually halted execution and said it failed that test.
 

ConcernedAnon

Concerned and hopefully anonymous
kiwifarms.net
Java is horrifying and the people responsible for it should be executed.

Here is the standard implementation of addAll for ArrayList.
1580696542599.png

Note that it copies the contents of c into an array, then copies that array into its own array, creating at least one throwaway array the length of c. There I was thinking the standard implementation would do something sensible, but it turns out this was responsible for a lot of the allocations in my program.

This fixed it. FFS :stress:
1580696256894.png
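For anyone reading without the screenshots: the stock implementation has roughly the shape sketched in the comment below (paraphrased from memory, so check your JDK's src.zip), and improvedAddAll is a guess at what a fix from outside the class looks like, since it can only use the public API:

```java
import java.util.ArrayList;
import java.util.Collection;

// Approximate shape of OpenJDK 8's ArrayList.addAll (from memory; the
// screenshot above shows the real thing). c.toArray() allocates a whole
// temporary array before arraycopy copies it again into elementData:
//
// public boolean addAll(Collection<? extends E> c) {
//     Object[] a = c.toArray();              // first copy (garbage)
//     int numNew = a.length;
//     ensureCapacityInternal(size + numNew);
//     System.arraycopy(a, 0, elementData, size, numNew); // second copy
//     size += numNew;
//     return numNew != 0;
// }

public class AddAllFix {
    // Allocation-free alternative: reserve space once, then add element
    // by element. No temporary array is created.
    public static <E> boolean improvedAddAll(ArrayList<E> dst,
                                             Collection<? extends E> src) {
        dst.ensureCapacity(dst.size() + src.size()); // grow at most once
        boolean changed = false;
        for (E e : src) changed |= dst.add(e);
        return changed;
    }
}
```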


There are probably two reasons for their exceptional implementation. One is that an array is a nice reliable format that is unlikely to do anything unexpected during the operation: an array can't change size, whereas because toArray is a member function you can't guarantee the collection's values won't suddenly change under you. Two is that arraycopy is quite fast; however, toArray will either iterate over the collection or otherwise duplicate an array anyway, so this doesn't really provide any benefit.

I am baffled by their retarded decisions.
 

Shoggoth

kiwifarms.net
arraycopy
I'd assume everything runs faster and generates less garbage now. Do you have an estimate by how much?
It is weird to use arraycopy because it's fast when the entire collection is iterated over anyway, but perhaps the cost of add operations is also bigger. It's probably faster to iterate over a Collection and create an array of known size than it is to add the elements one by one.
There's probably some n for which:
Code:
toArray(n) + arraycopy(n) < n * add
The interesting question is how it's affected by GC, which GC you're using, if you're using G1 how big is your eden space, and what's your typical n.
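A quick-and-dirty way to poke at that inequality. This is not JMH, so take the numbers with a grain of salt; a serious comparison needs JMH plus GC logging (-Xlog:gc) to see the garbage side of the story:

```java
import java.util.ArrayList;
import java.util.List;

// Rough probe of toArray(n) + arraycopy(n) vs. n * add:
// bulk addAll (toArray + arraycopy under the hood) against a
// pre-sized element-by-element loop.
public class AddAllProbe {
    static long timeBulk(List<Integer> src, int reps) {
        long best = Long.MAX_VALUE;
        for (int r = 0; r < reps; r++) {
            ArrayList<Integer> dst = new ArrayList<>();
            long t0 = System.nanoTime();
            dst.addAll(src);                      // library path
            best = Math.min(best, System.nanoTime() - t0);
        }
        return best;
    }

    static long timeLoop(List<Integer> src, int reps) {
        long best = Long.MAX_VALUE;
        for (int r = 0; r < reps; r++) {
            ArrayList<Integer> dst = new ArrayList<>();
            dst.ensureCapacity(src.size());       // reserve once, up front
            long t0 = System.nanoTime();
            for (Integer i : src) dst.add(i);     // n * add path
            best = Math.min(best, System.nanoTime() - t0);
        }
        return best;
    }

    public static void main(String[] args) {
        List<Integer> src = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) src.add(i);
        System.out.printf("addAll: %d ns, loop add: %d ns%n",
                timeBulk(src, 50), timeLoop(src, 50));
    }
}
```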
 
  • Like
Reactions: ConcernedAnon

Citation Checking Project

Wokescolds of the world, unite!
True & Honest Fan
kiwifarms.net
I'd assume everything runs faster and generates less garbage now. Do you have an estimate by how much?
It is weird to use arraycopy because it's fast when the entire collection is iterated over anyway, but perhaps the cost of add operations is also bigger. It's probably faster to iterate over a Collection and create an array of known size than it is to add the elements one by one.
There's probably some n for which:
Code:
toArray(n) + arraycopy(n) < n * add
The interesting question is how it's affected by GC, which GC you're using, if you're using G1 how big is your eden space, and what's your typical n.
I think ∄n. To me this looks like defensive programming: copy the other collection just in case it changes on you.

Mutable Collections: Not Even Once

If we try to reason about the performance of improvedAddAll, I would argue that
Code:
addAll(n) = toArray(n) + ensureCapacity + arraycopy(n) + O(1)
improvedAddAll(n) = ensureCapacity + n * add[withinCapacity]
addAll(n) - improvedAddAll(n) = toArray(n) + arraycopy(n) - n * add[withinCapacity]
toArray(n) >= allocation + n * assignment
arraycopy(n) >= n * assignment
add[withinCapacity] = ensureCapacity[withinCapacity] + assignment
ensureCapacity[withinCapacity] = integer comparison
addAll(n) - improvedAddAll(n) >= allocation + n * (2 * assignment - ensureCapacity[withinCapacity] - assignment)
addAll(n) - improvedAddAll(n) >= allocation + n * (assignment - integer comparison)
The allocation is proportional to n, whereas the integer comparison could possibly be optimized out by a smart JIT. I doubt that addAll is more performant than improvedAddAll at any large n. Even if assignment < integer comparison, is the n at which both quantities are equal reachable in practice?
implementation of ArrayList.add
helper method for ArrayList.add

EDIT: actually, a Java array assignment must include at least one integer comparison, because the out-of-bounds condition has to be detected, so I would say that, unless the JIT is being very weird, assignment > integer comparison.
 
Last edited:

ConcernedAnon

Concerned and hopefully anonymous
kiwifarms.net
I'd assume everything runs faster and generates less garbage now. Do you have an estimate by how much?
It is weird to use arraycopy because it's fast when the entire collection is iterated over anyway, but perhaps the cost of add operations is also bigger. It's probably faster to iterate over a Collection and create and array of known size than it is to add the elements one by one
There's probably some n for which:
Code:
toArray(n) + arraycopy(n) < n * add
The interesting question is how it's affected by GC, which GC you're using, if you're using G1 how big is your eden space, and what's your typical n.
I didn't keep the profiling data, but I would estimate that this simple change saved 5-10 megabytes of garbage for every 10-20 seconds of runtime.
This issue appeared in some of my concurrent code, in which several worker threads put their results into separate collections and then, once finished, bulk-copy all of their results into a shared collection protected by a mutex. Everything is reused, so I had assumed that their storage would be too, and thus that there would be no allocations after a couple of rounds.


I think ∄n. To me this looks like defensive programming: copy the other collection just in case it changes on you.

Mutable Collections: Not Even Once

If we try to reason about the performance of improvedAddAll, I would argue that
Code:
addAll(n) = toArray(n) + ensureCapacity + arraycopy(n) + O(1)
improvedAddAll(n) = ensureCapacity + n * add[withinCapacity]
addAll(n) - improvedAddAll(n) = toArray(n) + arraycopy(n) - n * add[withinCapacity]
toArray(n) >= allocation + n * assignment
arraycopy(n) >= n * assignment
add[withinCapacity] = ensureCapacity[withinCapacity] + assignment
ensureCapacity[withinCapacity] = integer comparison
addAll(n) - improvedAddAll(n) >= allocation + n * (2 * assignment - ensureCapacity[withinCapacity] - assignment)
addAll(n) - improvedAddAll(n) >= allocation + n * (assignment - integer comparison)
The allocation is proportional to n, whereas the integer comparison could possibly be optimized out by a smart JIT. I doubt that addAll is more performant than improvedAddAll at any large n. Even if assignment < integer comparison, is the n at which both quantities are equal reachable in practice?
implementation of ArrayList.add
helper method for ArrayList.add

EDIT: actually, a Java array assignment must include at least one integer comparison, because the out-of-bounds condition has to be detected, so I would say that, unless the JIT is being very weird, assignment > integer comparison.
That's the funny thing, though: the collection could change during toArray anyway, so what's the difference really? Ultimately, while there may actually be some performance benefit to their implementation for non-interactive applications, the advantage is meaningless for anything that requires responsiveness. Unnecessary GC costs simply cannot be tolerated.

My ideal implementation would look something like this
Java:
public boolean addAll(Collection<? extends E> c) {
    // I wrote this from memory so test this before use lol
    int newCount = c.size();
    int expectedSize = this.size + newCount;
    ensureCapacityInternal(expectedSize); // Reserve room

    int added = 0;
    Iterator<? extends E> iter = c.iterator();
    try {
        while (iter.hasNext() && added < newCount) { // Write all expected elements
            E cur = iter.next();
            this.elementData[this.size + added++] = cur;
        }
    } finally {
        this.size += added; // Ensure completed additions are noted
    }
    if (added >= newCount) { // Wrote everything expected; c may have grown since size() was read
        while (iter.hasNext()) {
            E cur = iter.next();
            this.add(cur); // Iterate over them and add them the slow way
            added++;
        }
    }
    return added > 0;
}
This effectively captures the theoretical strengths of their implementation without producing any garbage (ignoring iter), and I would have used it for improvedAddAll if size and elementData were not private members of ArrayList.

If you wanted to get really silly you could even add a branch that specializes the operation for c as an ArrayList, and then you could just use arraycopy without allocation and with known target behavior.
 
Last edited:

Shoggoth

kiwifarms.net
Unless we're getting into the weeds of cache coherency, I think your analysis makes sense. Maybe calling add() in a loop messes up the cache while n assignments don't. I wouldn't even know how to test it. All of this is offset by the costs of garbage anyway.
Which GC are you using? Have you tried tuning the settings? Maybe increasing the size of the eden space could help.
But you're still stuck with unnecessary garbage.
If you feel like using immutable collections you could probably go for some lazy concatenation of chunked sequences, and then you'd just stick a pointer at the ass of your array to the next array. You can even implement that kind of object yourself, with the pointer being null by default; if it points to something, iteration continues on it. Zero copy, max win.
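Something like this, as a rough sketch (all names invented; a real version would keep a tail reference so append is O(1) instead of walking the chain):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// A chunk is a fixed array plus a pointer to the next chunk.
// Appending links chunks instead of copying elements; iteration
// walks the chain. Zero copy, at the cost of non-contiguous storage.
final class Chunk<E> implements Iterable<E> {
    final E[] data;
    Chunk<E> next; // null by default; iteration continues here if set

    Chunk(E[] data) { this.data = data; }

    void append(Chunk<E> tail) {
        Chunk<E> c = this;
        while (c.next != null) c = c.next; // walk to the last chunk
        c.next = tail;
    }

    public Iterator<E> iterator() {
        return new Iterator<E>() {
            Chunk<E> chunk = Chunk.this;
            int i = 0;

            public boolean hasNext() {
                while (chunk != null && i >= chunk.data.length) {
                    chunk = chunk.next; // hop to the next chunk
                    i = 0;
                }
                return chunk != null;
            }

            public E next() {
                if (!hasNext()) throw new NoSuchElementException();
                return chunk.data[i++];
            }
        };
    }
}
```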
 

ConcernedAnon

Concerned and hopefully anonymous
kiwifarms.net
Unless we're getting into the weeds of cache coherency, I think your analysis makes sense. Maybe calling add() in a loop messes up the cache while n assignments don't. I wouldn't even know how to test it. All of this is offset by the costs of garbage anyway.
Which GC are you using? Have you tried tuning the settings? Maybe increasing the size of the eden space could help.
But you're still stuck with unnecessary garbage.
I think I'll probably fuck around with the memory settings eventually; the default settings are way too conservative for a videogame, really.

If you feel like using immutable collections you could probably go for some lazy concatenation of chunked sequences, and then you'd just stick a pointer at the ass of your array to the next array. You can even implement that kind of object yourself, with the pointer being null by default; if it points to something, iteration continues on it. Zero copy, max win.
The issue with doing something like this is that the immutable collection will inevitably end up as garbage anyway. Using ArrayLists means that the backing storage is reused.
Basically this task is run every frame, and the results of the previous frame are usually obsolete, so if I were to use immutable collections I'd have to throw them away each round. Instead I can just use clear and reuse the storage next round, which is basically just an integer assignment. Wait... I should probably check that, but do I even want to know what they've done :cryblood:?
 

Shoggoth

kiwifarms.net
I think I'll probably fuck around with the memory settings eventually; the default settings are way too conservative for a videogame, really.
Both GC and memory settings are stuff you can experiment with. You can add JVM versions to that, as later versions have more GCs available.
immutable... throw them away each round
In most graphical applications the state of the program actually changes rather little between frames. Is it possible not to recalculate everything but to just update the difference? (This holds regardless of immutability, but that mode of work lends itself better to immutable collections.)
 

Splendid

Castigat ridendo mores
True & Honest Fan
kiwifarms.net
I have a C++ question: Is modern CMake (as elaborated upon here) just a meme, or is it enjoying use in the wild?
That's actually kind of hard to answer, since a lot of projects will at most disclose their language and cloud provider; the majority don't even do that. It depends on what you are looking for: there are a lot of tools that rarely see use outside of corporate environments, and a lot that are pretty much only used for FOSS stuff.
 
  • Informative
Reactions: SPAAAAAAACE

ConcernedAnon

Concerned and hopefully anonymous
kiwifarms.net
Both GC and memory settings are stuff you can experiment with. You can add JVM versions to that, as later versions have more GCs available.

In most graphical applications the state of the program actually changes rather little between frames. Is it possible not to recalculate everything but to just update the difference? (This holds regardless of immutability, but that mode of work lends itself better to immutable collections.)
This is a game, though, and I probably should have clarified: the operation in question involved collision checking. The optimizations I've been working on recently are intended to limit the number of times the collisions have to be recalculated, but they're not quite finished yet. Right now the overhead of the optimizations is using up about as much time as they save :lol: That said, they have improved performance in extreme cases like 500+ physics objects, though those aren't really realistic for the game.

Time to optimize my optimizations!


Also, mercifully, the implementation of clear is actually quite reasonable:
1580850651540.png

For my application I would probably have eschewed the nulling of the data, but I fully understand why they've chosen to do it.
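For anyone who can't see the image, clear boils down to something like this (paraphrased from memory in a toy class; the nulling loop is the important part):

```java
// Toy container showing the shape of ArrayList.clear: null out the
// occupied slots so the GC can reclaim the elements, then reset size.
// The backing array itself survives, which is why clear-and-reuse
// produces no garbage.
public class ClearSketch {
    Object[] elementData;
    int size;

    ClearSketch(int capacity) { elementData = new Object[capacity]; }

    void add(Object o) { elementData[size++] = o; } // assumes capacity

    void clear() {
        for (int i = 0; i < size; i++)
            elementData[i] = null; // release references for the GC
        size = 0;                  // backing array kept for reuse
    }
}
```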
 
  • Like
Reactions: Shoggoth

ConcernedAnon

Concerned and hopefully anonymous
kiwifarms.net
Do all the physics objects change positions every frame? I apologize if I'm stating the obvious, but couldn't you cut down on the checks if you limited your recalculations to only the objects which changed state?
That's what I've been working on. The issue is that you can't just stop checking for collisions, as the object needs to be awoken when an unbalanced force is applied. My solution has been to 'tenure' collisions whose parameters haven't changed for a frame or two. The catch with that is that you need to be able to look up tenured collisions efficiently, so that you can skip the actual calculations.

At the moment I'm just using a sorted dictionary of sorts, but I think I might try to create some kind of weird singly-linked-list-esque thing that directly associates the collisions with the colliders that caused them. I'm concerned that the additional structure and management of such a solution might be more trouble than it's worth, but we'll see.
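The dictionary version is roughly this kind of thing (a toy sketch with invented names, not my actual code): key each collision by the unordered pair of collider ids packed into a long, so a stable contact can be found in O(1) and its narrow-phase math skipped.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of tenured-collision lookup: a contact counts as tenured once
// its parameters have been unchanged for two frames, after which the
// full collision calculation can be skipped until something moves.
public class TenureTable {
    // Order-independent key for a collider pair.
    public static long pairKey(int idA, int idB) {
        int lo = Math.min(idA, idB), hi = Math.max(idA, idB);
        return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
    }

    public static final class Contact {
        int stableFrames;                           // frames with unchanged parameters
        boolean tenured() { return stableFrames >= 2; }
    }

    private final Map<Long, Contact> contacts = new HashMap<>();

    // Called once per frame per overlapping pair; returns true if the
    // full collision calculation can be skipped this frame.
    public boolean noteAndCheck(int idA, int idB, boolean parametersChanged) {
        Contact c = contacts.computeIfAbsent(pairKey(idA, idB), k -> new Contact());
        if (parametersChanged) c.stableFrames = 0;  // un-tenure on any change
        else c.stableFrames++;
        return c.tenured();
    }
}
```

The linked-list variant would replace the map lookup with a pointer stored on the collider itself, trading the hash for extra bookkeeping.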
 
  • Like
Reactions: Shoggoth
Tags
None