Use the code below [reference]:
app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        context.Context.Response.Headers.Add("Access-Control-Allow-Origin", "*");
        context.Context.Response.Headers.Add("Access-Control-Allow-Methods", "POST, GET, DELETE, PUT, PATCH, OPTIONS");
    }
});
Thanks to @HansPassant, here are the registry keys and values required for serving a single class with a single interface from a C# Windows Service called MyService:
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\TypeLib\{TlbGuid}]
@=""
[HKEY_CLASSES_ROOT\TypeLib\{TlbGuid}\1.0]
@="MyService"
[HKEY_CLASSES_ROOT\TypeLib\{TlbGuid}\1.0\0]
[HKEY_CLASSES_ROOT\TypeLib\{TlbGuid}\1.0\0\win64]
@="C:\\path\\to\\MyService.tlb"
[HKEY_CLASSES_ROOT\TypeLib\{TlbGuid}\1.0\FLAGS]
@="0"
[HKEY_CLASSES_ROOT\Interface\{InterfaceGuid}]
@="IMyInterface"
[HKEY_CLASSES_ROOT\Interface\{InterfaceGuid}\ProxyStubClsid32]
@="{00020424-0000-0000-C000-000000000046}"
[HKEY_CLASSES_ROOT\Interface\{InterfaceGuid}\TypeLib]
@="{TlbGuid}"
"Version"="1.0"
[HKEY_CLASSES_ROOT\CLSID\{ClassGuid}]
@="My class name"
"AppID"="{AppIdOrGuid}"
[HKEY_CLASSES_ROOT\CLSID\{ClassGuid}\Version]
@="1.0"
[HKEY_CLASSES_ROOT\AppID\{AppIdOrGuid}]
"LocalService"="MyService"
Then the implementation is easy on the C++ side:
#import "C:\path\to\MyService.tlb"
...
IMyInterfacePtr ptr(__uuidof(MyImplementingClass));
ptr->CallSomeFunction();
...
If you mean Microsoft (Azure) Active Directory, you need Strapi Enterprise: https://docs.strapi.io/dev-docs/configurations/sso
I did everything you said, but I still received the same error message. Then I thought it might be the internet connection and tried changing the proxy, but that didn't work either. Finally, I downgraded Gradle from 8.12 to 8.11.1 and the problem was solved.
If you are out of credits, you will also get a 404.
Can you tell me what to do if I use this version of the code?
<p id="block1"> HIDDEN TEXT</p>
<p id="show" onclick="$('#block1').show(1000)"> SHOW TEXT </p>
<p id="block2"> HIDDEN TEXT</p>
When you click the SHOW element, the second element (id="block2") should be displayed too.
Did you mean:
@model TeamModel
instead of:
@inject TeamModel
?
@inject means you want a dependency resolved by the DI container, which then provides it at run time. @model means there's a linked class that supplies values for your Razor page to use.
Open the VS Code settings.
In the key field, enter your extension in the format "*.extension" (example: *.html).
In the value field, enter the name of the language (example: html).
Key: *.sc
Value: python
If you want to run only through the Chrome browser in headed (non-headless) mode, you can use the command below: npx cypress run --headed --browser chrome
The problem was resolved by downgrading the @originjs/vite-plugin-federation package to version 1.3.6 from 1.3.8.
Source: vite-plugin-federation #660.
I want in this case all images to be proportionally up-scaled (i.e. to 150%), because all my layout is rem-based.
That doesn't make sense. Why should an image increase in size, and decrease in quality, because the user asked for larger text? I'd advise against it.
However, if you really did want to do this, just absolutely position an image inside an element that has its size based on rem.
Considering it merely as a standard (ignoring implementations), and given the amendments to ISO 639:
Consequently, if possible, use Alpha-3, with a fallback to Alpha-2 where relevant standards necessitate it (an example is the aforementioned BCP 47).
registerReceiver(
downloadReceiver, IntentFilter(DownloadManager.ACTION_DOWNLOAD_COMPLETE),
RECEIVER_EXPORTED
)
If you choose RECEIVER_NOT_EXPORTED or don't specify any value, you cannot receive broadcasts from other apps or the OS. Adding RECEIVER_EXPORTED fixed my problem.
This package does not support Windows, the operating system on which you ran the command. It depends on a library released only for Linux and macOS.
You could try Windows Subsystem for Linux, or Docker/Rancher/colima/podman.
Segoe UI is a Windows font, it doesn't exist on Android.
Using the example from How to update insert JSON property in an array of object with T-SQL I applied your scenario:
SELECT @json = JSON_MODIFY(@json, CONCAT('$.Options[', [key], '].IsDeleted'), 'false')
FROM OPENJSON(@json, '$.Options')
Did you try changing "LengthParam" to "Length"? This works on my side.
It does not work; it is a scam. Just build it from scratch.
This post has been gathering dust for a while, but in case it helps anyone, I will leave my fix.
I'm using Swiper in React. When the user navigates to a product, they can slide through its different images. When the user goes to another product, the swiper is initialized at the position of the previous product's image carousel.
I tried several approaches: Swiper's methods, initialSlide = 0, and so on, as published in the Swiper library, as well as a custom ref for the swiper carousel. The only solution that worked was to add a key to the React component containing the swiper carousel, set to the value of product.code. That way, React controls and handles updates for this component instead of leaving the previous state in the virtual DOM.
Hope this helps someone with this strange behaviour.
The project template for your shared code is ".NET MAUI Class Library".
You can put your LoginPage.xaml and other shared code there and reference this library from your other projects. Then you should be able to navigate to it as if it were directly in there.
If you already have Java available but java --version says there is no JDK, then run:
brew update && brew install --cask temurin
This will fix the path for Java.
Solved. The problem was the poor quality of the 3D models; one such disk had over 600,000 polygons.
If you want a video player, I made one; it's on my GitHub (click here). However, it's not a Flash-based player.
If I try to include it in the template, I get an error message: the path doesn't exist. If I use the command vendor/bin/typo3 site:sets:list, the extension wacon-cookie-manengment is not listed.
I also tried to include it in config.yaml under dependencies:
and got an error message that the extension doesn't exist.
Maybe something went wrong with the installation in Composer via Plesk?
As @YCR correctly said, decision trees are part of machine learning. The answer is "it depends": on the data type, the quantity, and the computational resources you have for training. For tabular data sets, a very good starting point is a random forest (an ensemble of decision trees). You can find implementations in many libraries, such as sklearn and OpenCV.
If you have an image database, like photos that should be classified by occupation (a painter, a farmer, and so on), I recommend convolutional neural networks such as ResNets. A nice trick to avoid a lot of computational time in training is transfer learning (for example, from the ImageNet database).
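At prediction time, the "ensemble of decision trees" idea reduces to a majority vote over the individual trees. As a toy sketch of that voting step only (in practice you would use something like sklearn's RandomForestClassifier; majority_vote here is my own illustrative helper, not a library function):

```python
def majority_vote(predictions):
    """Given one predicted class label per tree, return the most common label."""
    # Count each distinct label and keep the one with the most votes.
    return max(set(predictions), key=predictions.count)

# Five hypothetical trees voting on one sample:
# majority_vote([1, 0, 1, 1, 0]) -> 1
```

Bagging and per-tree feature subsampling are what make the individual trees diverse enough for this vote to help.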
What pac4j implementation (https://www.pac4j.org/implementations.html) do you use?
We are having the same problem. Did you work out the solution?
Had the same problem and found the solution here:
Tried all of these to no avail, but eventually discovered that one of the projects had a build configuration set to x64 instead of Any CPU. Changing this fixed the issue
I'm having the exact same problem. Did you find solution to it?
I am having the same issue using the "Save Emails and Attachments" extension for Google Sheets. I am not sure if this has something to do with a firewall issue since the app server seems to be in India...?
A question: can I use ffmpeg -r VIDEOFPS -i VIDEO -r VIDEOFPS -i VIDEO -lavfi blend=all_mode=grainextract -c:v libx264 -crf 0 -an ./out.mp4
for images, to extract the differences between 2 JPG/PNG images into an output image? Thanks a lot.
If you mean that you want it to install respecting the content of pubspec.lock, then you have to run the following command:
flutter pub get --offline
Note:
Did you solve this problem? Somehow I have the same issue here: my array gets shortened by 1 after just 3 iterations. I tried to reset its size before iterating again, but it didn't work; it still throws a similar error a few steps later. I do believe it has something to do with the way they convert grid points into a matrix, but I don't know how to solve it.
Click the three dots in Chrome, go to "Cast, save and share" -> "Install page as app...".
I believe that you can check KEY_EVENT_RECORD.wVirtualScanCode. It should contain the SCANCODE_SIMULATED (0x0200) flag in the case of a press of the real AltGr key, as part of the simulation of the Alt + Ctrl press.
Getting something similar.
I have a row of 2 cells, each cell nominally holding a panel.Tabs widget, and each Tabs widget holding 5x 3D views.
On the first draw, both are fine. But after one subsequent update, the other cell of the row drops out some, but not all, of the 3D views.
The data is correctly held right up to the point of re-issuing it to the main row cell.
I will try to create an MRE if it helps, but I'm a bit time-pressured at the minute.
Why not just skip that extra if-statement and include the !is_admin() within the first one? Like this:
function exclude_product_cat_children($wp_query) {
if ( isset ( $wp_query->query_vars['product_cat'] ) && $wp_query->is_main_query() && !is_admin() ) {
$wp_query->set('tax_query', array(
array (
'taxonomy' => 'product_cat',
'field' => 'slug',
'terms' => $wp_query->query_vars['product_cat'],
'include_children' => false
)
));
}
}
add_action('pre_get_posts', 'exclude_product_cat_children');
The issue appears to have been resolved by placing interleave on; in both application blocks. I assume this makes it impossible for audio and video data to arrive separately, and therefore much harder for a video player to misconstrue the number of streams in the data. It feels a bit hacky, because I still don't understand the issue, but it seems to have worked.
Your idea of using a NoSQL database for your dating app backend makes a lot of sense, especially with the dynamic and often complex relationships between users in the same collection. Array extraction can indeed be a viable approach, as it allows for efficient querying and management of user interactions like likes and dislikes.
It's a very common issue and reproduces if you delete the Podfile manually and create it again using pod init.
Use the following steps to easily resolve this issue, but make sure to back up the native code if it exists, or back up the settings you made in the Runner:
This will help you for sure. Best of luck
There are no CI/CD or management options for ADX dashboards today. We do have public API support, Git integration, and more with the same technology in the Fabric offering. ADX dashboards are called "Real-Time Dashboards" in Fabric and provide the same functionality. It is possible to export your dashboards from the ADX Web UI to Fabric, and it is also possible to connect Real-Time Dashboards to ADX clusters in Azure. For more details about Fabric, see https://www.microsoft.com/en-us/microsoft-fabric
The second method, using the INDIRECT function, works, and you can simply put the CONCAT expression inside the INDIRECT function; you don't need to put it in a separate cell.
E.g.: COUNTIF(INDIRECT(CONCAT("Start ",A1," End")),[Criteria])
When virtualization is enabled, only the visible items are rendered in the DOM, and the space for the non-visible items is calculated based on the total number of rows and columns. This approach helps optimize performance by not rendering all the elements at once.
However, the delay you're observing happens because the DOM takes a small amount of time (milliseconds) to render and re-render the cells of the virtualized table as you scroll. This is especially noticeable when scrolling rapidly because the scroll event is triggered at a higher frequency than the time it takes for the DOM to catch up with the rendered elements.
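The windowing arithmetic behind this can be sketched in a few lines (a generic illustration assuming a fixed row height; visible_window is a name I made up, not any library's API):

```python
def visible_window(scroll_top, viewport_height, row_height, total_rows):
    """Return (first_index, last_index, top_spacer_px, bottom_spacer_px)."""
    first = scroll_top // row_height
    # Last row whose top edge is at or above the bottom of the viewport.
    last = min(total_rows - 1, (scroll_top + viewport_height) // row_height)
    # Spacers reserve the scroll height of every row that is not rendered,
    # so the scrollbar behaves as if all rows were in the DOM.
    top_spacer = first * row_height
    bottom_spacer = (total_rows - 1 - last) * row_height
    return first, last, top_spacer, bottom_spacer
```

The arithmetic itself is O(1) per scroll event; the lag you see comes from the DOM mounting the newly visible cells, not from this calculation.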
The issue is coming from the backend: the CORS configuration didn't expose the "Authorization" header.
Find the file project\app\Http\Middleware\VerifyCsrfToken.php, then find the array $except = ['url/*']; the * applies to your function's URL.
I encountered the same problem when I used the old computer at my house to connect my company's server with Remote-SSH.
Zac's solution about switching the Jupyter extension version solved the problem! The only difference is I switched "from" the pre-release to the release version.
Thus I suspect the real trick is to reinstall the Jupyter extension.
(I wanted to add a comment instead of answering but my reputation wasn't high enough)
Fixed successfully with the index suggested by ChatGPT!
From 14s to 28ms:
Aggregate (cost=11377.88..11377.89 rows=1 width=8) (actual time=28.306..28.307 rows=1 loops=1)
-> Nested Loop (cost=0.99..11377.82 rows=26 width=4) (actual time=0.029..26.546 rows=34893 loops=1)
-> Index Scan using idx_36ac99f1296cd8ae on link l1_ (cost=0.43..194.94 rows=322 width=4) (actual time=0.013..0.156 rows=190 loops=1)
Index Cond: (team_id = 37)
-> Index Scan using idx_20b8ff21ada402718b8e8428 on stat s0_ (cost=0.56..34.39 rows=34 width=8) (actual time=0.004..0.122 rows=184 loops=190)
Index Cond: ((link_id = l1_.id) AND (created_at > '2024-12-14 07:46:04+01'::timestamp with time zone))
Planning Time: 0.403 ms
Execution Time: 28.343 ms
Thanks Laurent for your advice, and thanks Richard for EXPLAIN ANALYZE.
After discovering through testing that setting text colors in the typography of MaterialTheme would invalidate LocalContentColor, I removed them, as shown in the following code. As a result, not only does LocalContentColor work properly now, but the colors parameter of TextField also functions correctly.
If you set colors on Typography, these colors from styles already bound to certain parts of Material components, such as TextField, will take precedence over the colors parameter of TextField or LocalContentColor.
MaterialTheme(
colorScheme = colorScheme,
typography = Typography(
bodyMedium = TextStyle(
color = Color.Black // removed
),
// ...
),
content = content
)
I suggest searching for "is Not supported" in node_modules using VS Code or grep, because "Linux" can be a variable.
After two weeks of searching, I finally found the problem: every page had an animated GIF image with a resolution of 1500x1500. The problem was solved by resizing the GIF image to 100x100.
Maybe this can help you with part of the problem: https://quasar.dev/quasar-cli-webpack/developing-cordova-apps/managing-google-analytics/
To manage GCP ops-agent using Terraform, first, configure the Google Cloud provider in your Terraform setup. Then, create a google_compute_instance resource and specify metadata for enabling the ops-agent with the necessary logging and monitoring configurations. Use a startup script in the metadata_startup_script block to install the ops-agent on the virtual machine instances. You can also define additional configuration for ops-agent based on your monitoring needs. Finally, run terraform apply to deploy the resources and configure the ops-agent automatically.
For those who face this issue in a Fragment: please don't forget to put the @AndroidEntryPoint annotation!
@AndroidEntryPoint // --> don't forget this!
class ExampleClass : Fragment() {
    @Inject // --> don't forget this!
    lateinit var appNavigator: ComposeNavigator
}
To integrate many TYPO3 extensions you need to include their TypoScript settings. This is mostly done in the "Template" module; from v13 on you can add TypoScript per site in the Site Sets.
Documentation:
I'm not sure when this was added, but do take a look.
I had the exact same situation you are describing; however, without more information the root cause of your precise problem is not easy to identify.
What I would suggest is to double-check the IAM policy attached to the role you are using and verify that you are using the RDS instance "Resource ID" and not the "Instance ID".
I successfully implemented a real-time chat application using Spring Boot, WebSocket, SockJS, and Stomp. My setup works seamlessly with both Angular and React Native Mobile App. I want to share the configuration and code to help others who might encounter similar requirements. Here’s my working setup:
Backend Configuration: Dependencies:
implementation 'org.springframework.boot:spring-boot-starter-websocket'
implementation 'org.webjars:sockjs-client:1.0.2'
implementation 'org.webjars:stomp-websocket:2.3.3'
WebSocket Configuration:
@Configuration
@EnableWebSocketMessageBroker
@Order(Ordered.HIGHEST_PRECEDENCE + 99)
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/ws")
.setAllowedOrigins("http://192.168.0.184:4201","http://localhost:4200")
.withSockJS();
}
@Override
public void configureMessageBroker(MessageBrokerRegistry brokerRegistry) {
brokerRegistry.setApplicationDestinationPrefixes("/app");
brokerRegistry.enableSimpleBroker("/topic", "/queue", "/user");
brokerRegistry.setUserDestinationPrefix("/user");
}
}
Controller for Handling Messages:
@RequiredArgsConstructor
@RestController
public class RealTimeChat {
private final SimpMessagingTemplate messagingTemplate;
private final ServiceMessage serviceMessage;
private final WebSocketSessionRegistry sessionRegistry;
private static final Logger LOGGER = LoggerFactory.getLogger(RealTimeChat.class);
@MessageMapping("/chat.sendPrivate")
public void sendPrivateMessage(@Payload DtoMessage chatMessage) {
try {
String recipientUser = chatMessage.getUser() + "";
LOGGER.info("Sending message to: {}", recipientUser);
messagingTemplate.convertAndSendToUser(recipientUser, "/queue/private", chatMessage);
} catch (Exception e) {
e.printStackTrace();
}
}
@MessageMapping("/chat.register")
public DtoMessage register(@Payload DtoMessage chatMessage, SimpMessageHeaderAccessor headerAccessor) {
try {
String sessionId = headerAccessor.getSessionId();
headerAccessor.getSessionAttributes().put(chatMessage.getUser() + "", sessionId);
sessionRegistry.registerSession(chatMessage.getUser(), sessionId);
return chatMessage;
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException(e.getMessage());
}
}
}
Frontend Configuration: React Hook for Connecting to WebSocket:
useEffect(() => {
const socket = new SockJS(wsUrl);
const client = new Client({
webSocketFactory: () => socket,
debug: str => console.warn(str),
onConnect: () => {
console.log('Connected to WebSocket');
client.subscribe('/chat.register', message => {
const receivedMessage = JSON.parse(message.body);
console.log('[WebSocket] Received message:', receivedMessage);
});
},
onDisconnect: () => {
console.log('Disconnected from WebSocket');
},
onStompError: error => {
console.error('Stomp error:', error);
},
});
client.activate();
setStompClient(client);
return () => {
client.deactivate();
};
}, [userId, tenantId]);
Sending a Private Message:
const sendMessage = () => {
const message = {
userIdSend: 'ed477317-5fe8-4841-a13d-e45e01eb94be',
userIdTo: '1',
content: 'Testing message...',
type: true,
};
stompClient.publish({
destination: '/app/chat.sendPrivate',
body: JSON.stringify(message),
});
setNewMessage('');
};
Thank you for the hints. This worked for me when the FFprobe output wasn't redirected to a file.
It's a slight variation of the solution above. You don't need sexagesimal output for adjusting the .srt with -itsoffset, just the seconds, so you're good to go using the absolute path of the ffprobe location and the batch variables to indicate the path of the .mp3 and MP3_Duration.txt.
@echo off
cmd /c C:\FFMpeg\Bin\ffprobe -loglevel error -select_streams a -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 -i "%~d1%~p1%~n1.mp3" > "%~d1%~p1MP3_Duration.txt"
"With no internet connection" is the reason for the above "unable to load the service index for source https://api.nuget.org/v3/index.json" error: the app wants to install the required packages from nuget.org but can't access the NuGet API.
If the ADO server has absolutely no chance of getting internet access (even a proxy is not an option), you have to use the built-in Azure Artifacts in ADO (which can serve as a package server). You basically download the required packages manually from nuget.org and upload them manually to the ADO feed, then finally point your build server to use ADO as a NuGet server.
After upgrading Windows 11 to version 24H2, this trick stopped working for me. DeviceIoControl() returns false, and GetLastWin32Error() returns 87 (0x57), "parameter is incorrect". I tried various changes in the structure, but without good results. Does anyone know what the structure should look like now?
* {font-family: 'Segoe UI'; font-size: 10pt;}
select {border: none; color: #666;}
<label>
<span>Sell in</span>
<select>
<option>United States</option>
</select>
</label>
This is more of a Spring question, but if this is specifically about getting properties, you can just access them directly via the JDK using the static method java.lang.System#getProperty(java.lang.String), without relying on Spring injection.
One query: can we add these 2 def functions?
I don't click on those links. Next time, try using more common websites like Google. But I think you need to press JavaScript buttons (a guess).
Find the APIs via Chrome/Wireshark to see from which endpoint the data gets loaded. If you have that, you can simply curl the API.
Try disguising yourself as Googlebot (an old hack, but it often still works): try opening the page as Googlebot. Sometimes websites show all data to improve their search ranking.
Simple and always works: scrape with Selenium (Undetectable Selenium is the best). You can control the browser and do whatever you want; bot detection can't really catch it.
Go to Firebase Authentication, open the Google sign-in provider for your project; there is a web_client_id as you scroll down the tab. Copy that and paste it as the requestIdToken.
First we need to encode the objectGUID byte array into a hexadecimal representation with an escaped backslash (\) before each byte:
private String encodeToHexWithEscape(byte[] bytes) {
StringBuilder hexBuilder = new StringBuilder();
for (byte b : bytes) {
// Format each byte as a two-digit hex value with a leading backslash
hexBuilder.append(String.format("\\%02X", b & 0xFF));
}
return hexBuilder.toString();
}
Then use it in the filter for the LDAP search:
String hexEncodedGUID = encodeToHexWithEscape(objectGUID);
String filter = String.format("(objectGUID=%s)", hexEncodedGUID);
// Perform the search
ldapTemplate.search(
"", // Base DN (modify according to your directory structure)
filter,
new MyContextMapper()
);
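For illustration only, the same \XX escaping can be sketched in Python (the function name here is mine, mirroring the Java above, and is not part of any LDAP library):

```python
def encode_to_hex_with_escape(data: bytes) -> str:
    # Each byte becomes a backslash followed by two uppercase hex digits,
    # the escaped form LDAP filter strings expect for binary attribute values.
    return "".join(f"\\{b:02X}" for b in data)

# e.g. building the search filter:
# ldap_filter = "(objectGUID=%s)" % encode_to_hex_with_escape(object_guid_bytes)
```

The escaping requirement comes from the LDAP filter string representation, which reserves bytes like NUL, parentheses, and the asterisk.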
Short answer
As we know, a key is scoped to its container. Therefore, even if there is no change in keys, a change in container will lead to recreating the same component.
Detailed answer
The above point puts the emphasis on the container. On the other hand, recursive rendering, as the code below does, has a significant impact on the resulting containers.
export default function Group({ group }: Props) {
...
else return <Group key={ele.id} group={ele} />
...
console.log(elements);
return <div>{elements}</div>;
}
The console log in the code will help us see that, with the given data, this component is rendered twice with two separate sets of data. It means the single declaration below is rendered at two separate times. This is true since there is a recursive call for the nested group with id 2, in addition to the initial call for id 0.
<Group group={group} key="0" />
Let us view the console log generated:
// Array 1
// 0:{$$typeof: Symbol(react.element), key: '1', ...
// 1:{$$typeof: Symbol(react.element), key: '2', ...
// 2:{$$typeof: Symbol(react.element), key: '5', ...
// Array 2
// 0:{$$typeof: Symbol(react.element), key: '3' ...
// 1:{$$typeof: Symbol(react.element), key: '4' ...
Observation
These are the two distinct arrays React has created for us while rendering the component: in this case, the two arrays contain items 1, 2, 5 and items 3, 4 respectively.
Whenever a change in the data results in a change of the containing array, React will remove the component from the container it moved out of and add the component to the container it moved into. This is the reason for the issue we have been facing in this post while moving an object from one group to another.
Coming back to the point again: we face this issue because, internally, there are separate arrays for each nested group.
One possible solution
One solution may be to render in a way that does not produce a separate container for each group. However, this approach necessitates rethinking the recursive render. We need to find a way to render so that all the items are contained in a single array; then we can move items as we wish, and React will neither remove nor add components.
The following sample code demos two things:
a. The existing render and the issue we now face.
b. A render without recursion, so that the issue may be addressed.
App.js
import { useState } from 'react';
export default function App() {
const [someData, setSomeData] = useState(getInitialData());
return (
<>
Existing declaration
<Group group={someData} key="0" />
<br />
proposed declaration List
<br />
<List obj={someData} key="00" />
<br />
<button
onClick={() => {
setSomeData(moveTextFromGroup2toGroup0());
}}
>
move text 3 from Group 2 to Group 0
</button>
<br />
<button
onClick={() => {
setSomeData(moveTextWithinGroup2());
}}
>
move text 3 within Group 2
</button>
<br />
<button
onClick={() => {
setSomeData(getInitialData());
}}
>
Reset
</button>
</>
);
}
function List({ obj }) {
let items = [];
let stack = [];
stack.push(obj);
while (stack.length) {
const o = stack[0]; // o for object
if (o.type === 'group') {
// if group type, then push into stack
// to process in the next iteration
for (let i = 0; i < o.groups.length; i++) {
stack.push({ ...o.groups[i], groupId: o.id });
}
} else {
// if not group type, keep to render
items.push(<A key={o.id} label={'Group ' + o.groupId + ':' + o.text} />);
}
stack.shift(); // remove the processed object
}
return items;
}
function Group({ group }) {
const elements = group.groups.map((ele) => {
if (ele.type === 'other')
return <A key={ele.id} label={'Group ' + group.id + ':' + ele.text} />;
else return <Group key={ele.id} group={ele} />;
});
console.log(elements);
return <div>{elements}</div>;
}
function A({ label }) {
const [SomeInput, setSomeInput] = useState('');
return (
<>
<label>{label}</label>
<input
value={SomeInput}
onChange={(e) => setSomeInput(e.target.value)}
></input>
<br />
</>
);
}
function getInitialData() {
return {
id: 0,
type: 'group',
groups: [
{
id: 1,
type: 'other',
text: 'text 1',
},
{
id: 2,
type: 'group',
groups: [
{
id: 3,
type: 'other',
text: 'text 3',
},
{
id: 4,
type: 'other',
text: 'text 4',
},
],
},
{
id: 5,
type: 'other',
text: 'text 5',
},
],
};
}
function moveTextWithinGroup2() {
return {
id: 0,
type: 'group',
groups: [
{
id: 1,
type: 'other',
text: 'text 1',
},
{
id: 2,
type: 'group',
groups: [
{
id: 4,
type: 'other',
text: 'text 3',
},
{
id: 3,
type: 'other',
text: 'text 4',
},
],
},
{
id: 5,
type: 'other',
text: 'text 5',
},
],
};
}
function moveTextFromGroup2toGroup0() {
return {
id: 0,
type: 'group',
groups: [
{
id: 1,
type: 'other',
text: 'text 1',
},
{
id: 3,
type: 'other',
text: 'text 3',
},
{
id: 2,
type: 'group',
groups: [
{
id: 4,
type: 'other',
text: 'text 4',
},
],
},
{
id: 5,
type: 'other',
text: 'text 5',
},
],
};
}
Test run
On loading the app
Test to move the component Text 3 in Group 2 to Group 0 - using the recursive rendering
After clicking the button "move text 3 from Group 2 to Group 0".
Observation
The component has been moved from Group 2 to Group 0, as we can verify from the labels; however, the input has been lost. It means React has removed the component from Group 2 and newly added it to Group 0.
We shall do the same test with rendering without recursion
After clicking the button "move text 3 from Group 2 to Group 0".
Observation
The component has been moved while retaining the input. It means React has neither removed nor added it.
Therefore the point to take note of may be this:
Components with keys will retain state as long as their container is not changing.
Aside: Components without keys will also retain state as long as their position in the container is not changing.
Note:
The sole objective of the proposed solution is not to say "do not use recursive rendering; use an imperative approach as the sample code does". The sole objective here is to make it clear that the container has great significance in retaining state.
Citations:
Is it possible to traverse object in JavaScript in non-recursive way?
I did that by creating a Docker image that COPYs the needed config file to some path and then sets CMD ["--config=/path/to/config"].
If you don't have any code for a ScriptableRenderPass or anything like that, you should manually disable compatibility:
Go to Edit > Project Settings > Graphics, scroll straight to the bottom to "Render Graph", and uncheck the "Compatibility Mode (Render Graph Disabled)" checkbox.
Excel serial dates count days from the start of 1900: serial 1 is 1900-01-01. Because of that day-1 convention, plus Excel's historical quirk of treating 1900 as a leap year, the correct base date for arithmetic is 1899-12-30 (exact for any serial from 61, i.e. 1900-03-01, onward). So add the Excel serial number (45669 in this case) as days to that base date using DATE_ADD.
The query will look like this:
SELECT DATE_ADD('1899-12-30', INTERVAL 45669 DAY) AS normal_date; -- 2025-01-12
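The conversion can be sanity-checked in Python. Note that because Excel's day 1 is 1900-01-01 and Excel treats 1900 as a leap year (a Lotus 1-2-3 compatibility quirk), the exact epoch for serials of 61 or more (dates from 1900-03-01 onward) is 1899-12-30:

```python
from datetime import date, timedelta

# Exact epoch for Excel serials >= 61; it absorbs both the day-1 convention
# and the phantom 1900-02-29 that Excel counts.
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial_to_date(serial: int) -> date:
    """Convert an Excel serial day number to a calendar date."""
    return EXCEL_EPOCH + timedelta(days=serial)

# excel_serial_to_date(45669) -> date(2025, 1, 12)
```

Any SQL conversion should agree with this function for serials of 61 and above.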
I was initially confused by the error, as I was not using Fleet Manager managed EC2 instances in my Terraform code.
After further investigation, I realized that the credit details on AWS had expired, hence I had outstanding bills. Once this was settled, normal quota and operation resumed.
Sometimes the error is a means of AWS throttling an account.
Did you manage to fix this? I have the same issue, and I cannot use port 4000. I tried changing all "4000" to another port in both Docker and .env, but I get "container unhealthy".
To anyone finding this answer in 2025 and beyond: the Docker post-install documentation also recommends creating the docker group and adding your user to it.
It will be less effective. Schemas rarely change, but rows are frequently added or deleted, so what you're saying is correct only if the code never changes. I would like to ask why you believe a single row is more efficient.
You can modify a database table without restarting the app; however, changes to the database often require corresponding code changes, which usually necessitate restarting the application.
I've read elsewhere that the only option in my scenario is to create a new contact and delete the old one.
From when I last checked a year or so ago, this still holds true. This is how I've implemented it in my application.
OK, so first I need the contact ID. Apparently the only way to obtain this (if you don't already have it) is to use the search by email endpoint
This is also true. Additionally, take into account that contact creation is an asynchronous process that may take several minutes to complete, so the contact ID is not immediately available after creation. Other operations such as adding a contact to a list may also have a delay of seconds or minutes to be visible to subsequent "GET"-type requests.
Am I forced to additionally determine which unsubscribe groups the original contact belonged to and replicate their unsubscribes?
I'm afraid so, yes. I would also check if a deleted contact's email address is also removed from a Global or Group unsubscribe list. It may just as well remain, which is probably not an immediate problem for you, but it might cause an unexpected situation if the user re-subscribes on the new email and then reverts to the old email and has their emails blocked (admittedly fairly unlikely).
I mostly don't use Unsubscribe Groups. I use list membership to denote whether a contact is subscribed or not. If they unsubscribe, I remove them from the list. This may also be an option for you, but I'm running into other challenges here due to the intervals in which segments update based on list membership. Unsubscribe Groups are probably more reliable.
This whole process seems so badly designed for what must be an extremely common scenario, so I'm hoping someone out there can tell me what I'm missing.
I agree that this developer experience is horrific. You're not alone in this feeling. I don't think you're missing anything though.
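The create-new / delete-old flow discussed above can be sketched as pure request descriptors, which makes the sequence easy to see. The endpoint paths below follow my reading of SendGrid's v3 Marketing and ASM APIs, but treat them as assumptions and verify against the current API reference before relying on them:

```javascript
// Sketch (not a definitive implementation) of migrating a contact to a new
// email address. Endpoint paths are assumptions based on SendGrid's v3 API.
function buildMigrationRequests(oldContactId, newEmail, unsubGroupIds) {
  const requests = [];

  // 1. Upsert the contact under the new address (asynchronous on SendGrid's
  //    side -- the new contact ID is not available immediately).
  requests.push({
    method: "PUT",
    path: "/v3/marketing/contacts",
    body: { contacts: [{ email: newEmail }] },
  });

  // 2. Replicate each unsubscribe-group suppression onto the new address.
  for (const groupId of unsubGroupIds) {
    requests.push({
      method: "POST",
      path: `/v3/asm/groups/${groupId}/suppressions`,
      body: { recipient_emails: [newEmail] },
    });
  }

  // 3. Delete the old contact (deletion is asynchronous as well).
  requests.push({
    method: "DELETE",
    path: `/v3/marketing/contacts?ids=${oldContactId}`,
  });

  return requests;
}
```

Executing these in order (with a real HTTP client and API key) mirrors the search → create → replicate-unsubscribes → delete sequence described above.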
You might have moved files around, so verify that the path in the content property in tailwind.config.ts points to the correct files.
This is a workaround until GitLab fixes this problem.
If you add a temporary job to gitlab-ci.yml whose rule always resolves to false, this fixes the problem, e.g.:
temp:
  script: echo
  rules:
    - if: $EMPTY_VAR # always false, as long as EMPTY_VAR is never set
It seems like your navigation issue might be caused by theme conflicts, plugins, or JavaScript errors. Try clearing your cache, checking for errors in the Developer Tools, and disabling plugins to see if one is the problem. You could also switch to a default theme like Twenty Twenty-One to check if it's a theme issue. Ensure everything is updated, and if you've made customizations, try disabling them. If nothing works, contact the theme’s support team for help.
Checking the tier seems to be a part of the authorization layer, same as checking if the user has logged in.
I don't know how small you like your microservices, but you can either:
Ok, this report will go into "The Twilight Zone" category.
@David Browne - microsoft and @Steve Py, your comments were enough to make me dig further.
Since you suggested this wasn't an EF issue, I rechecked my code. I found out that in the MAUI project I had defined PROJECT_TABLET for the Android target only, but I was running the code on the Windows target. Therefore, the code was properly compiled against DatabaseGeneratedOption.Computed.
A clean and recompile after the fix solved the problem, as expected.
Still, it made me wonder why I didn't get a compile-time error on the line
#if !PROJECT_TABLET
This is not code and will generate compile time error.
#endif
But I can't reproduce this behavior. I tend to triple-check my code before posting, and I'm 100% sure that I copied the SQL generated while the test code above was included. Either my memory failed miserably, or I fell into a VS2022/MAUI hole that a clean-delete-close reboot got me out of. I'll never know...
Try changing your font; then it will work.
The issue is that port 8080 is being used by OpenWebUI inside its Docker container by default. Even though you see it on port 3000 outside of Docker, inside its own container it's connecting to itself when it tries localhost:8080 or any equivalent.
From OpenWebUI's perspective, localhost:8080 is internal to the container. It needs to be remapped so that either SearxNG is on a different port relative to OpenWebUI, or OpenWebUI calls it as a named service. Honestly, named services are the best way to address this, because they prevent conflicts with everything else that wants to use these ultra-common ports.
There are directions on how to remap it here: https://docs.openwebui.com/tutorials/integrations/web_search/
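A rough sketch of the named-service approach in a Compose file. The image names, ports, and environment variable are assumptions based on common setups; check the OpenWebUI docs linked above for the exact, current configuration:

```yaml
# Sketch: OpenWebUI reaches SearxNG by service name instead of localhost.
services:
  searxng:
    image: searxng/searxng
    # no host port mapping needed if only OpenWebUI talks to it
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # host 3000 -> container 8080
    environment:
      # inside the Compose network, "searxng" resolves to the SearxNG container
      - SEARXNG_QUERY_URL=http://searxng:8080/search?q=<query>
    depends_on:
      - searxng
```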
Hello, here is a result I was able to achieve that could help you. I used a JS script for this. Please let me know if it works well for you.
class WeekGenerator {
constructor() {
this.currentWeek = [];
this.SelectedDateList = [];
this.selectedYear = new Date().getFullYear();
this.selectedMonth = new Date().getMonth();
this.selectedDay = new Date().getDate();
this.weekDays = [
"Sunday",
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday",
"Saturday",
];
this.monthNames = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
];
}
generateCurrentWeek(day, month, year) {
const selectedDate = new Date(year, month, day);
const startOfWeek = new Date(selectedDate);
startOfWeek.setDate(selectedDate.getDate() - selectedDate.getDay());
this.currentWeek = [];
this.SelectedDateList = [];
for (let i = 0; i < 7; i++) {
const currentDate = new Date(startOfWeek);
currentDate.setDate(startOfWeek.getDate() + i);
const formattedDate = `${currentDate.getFullYear()}-${(currentDate.getMonth() + 1)
  .toString()
  .padStart(2, "0")}-${currentDate.getDate().toString().padStart(2, "0")}`;
this.currentWeek.push({
date: currentDate.getDate(),
day: this.weekDays[currentDate.getDay()],
month: this.monthNames[currentDate.getMonth()],
year: currentDate.getFullYear(),
});
this.SelectedDateList.push({ date: formattedDate });
}
this.displayWeek();
}
previousWeek() {
this.adjustWeek(-7);
}
nextWeek() {
this.adjustWeek(7);
}
adjustWeek(offsetDays) {
const firstDayOfCurrentWeek = new Date(
this.selectedYear,
this.selectedMonth,
this.selectedDay
);
firstDayOfCurrentWeek.setDate(firstDayOfCurrentWeek.getDate() + offsetDays);
this.selectedYear = firstDayOfCurrentWeek.getFullYear();
this.selectedMonth = firstDayOfCurrentWeek.getMonth();
this.selectedDay = firstDayOfCurrentWeek.getDate();
this.generateCurrentWeek(
this.selectedDay,
this.selectedMonth,
this.selectedYear
);
}
displayWeek() {
const weekDisplay = document.getElementById("weekDisplay");
weekDisplay.innerHTML = "";
this.currentWeek.forEach((dayInfo) => {
const li = document.createElement("li");
li.textContent = `${dayInfo.day}, ${dayInfo.date} ${dayInfo.month} ${dayInfo.year}`;
weekDisplay.appendChild(li);
});
}
}
const weekGenerator = new WeekGenerator();
weekGenerator.generateCurrentWeek(
weekGenerator.selectedDay,
weekGenerator.selectedMonth,
weekGenerator.selectedYear
);
document
.getElementById("prevWeekBtn")
.addEventListener("click", () => weekGenerator.previousWeek());
document
.getElementById("nextWeekBtn")
.addEventListener("click", () => weekGenerator.nextWeek());
It is quite simple. Just save the instance of the GoRouter in which you have defined your routes and use that instance to navigate:
final router = GoRouter(routes: [/* your routes here */]);
router.push(location);
I prefer to use JetBrains Mono for the terminal.
Easier on the eyes :)
Try Corello; it will simplify the creation and serialization of JSON > object > JSON:
npm i corello
For whom it might help: I have encountered this error when I tried to group documents by several fields where one of them was a boolean field. After I removed the boolean field from the group _id expression, the error was gone.
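To illustrate the fix described above, here is a hypothetical before/after pair of $group stages. The field names ("region", "isActive") are invented for the example; the point is that the boolean field is dropped from the _id expression (here folded into an accumulator instead):

```javascript
// Hypothetical illustration -- field names are assumptions, not from the
// original question. The failing pipeline grouped on a boolean inside _id.
const failingGroup = {
  $group: {
    _id: { region: "$region", isActive: "$isActive" }, // boolean in _id
    count: { $sum: 1 },
  },
};

// Removing the boolean from _id (counting it via $cond instead) avoided
// the error in my case.
const workingGroup = {
  $group: {
    _id: { region: "$region" }, // boolean field removed
    activeCount: { $sum: { $cond: ["$isActive", 1, 0] } },
    count: { $sum: 1 },
  },
};
```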
I had the same problem. After hours of searching I found the cause, which was specific to my Windows setup. The TYPO3 13 LTS core has problems detecting correct paths on Windows machines (which is the case for user500665, too). More specifically, it is the resolution of backslashes. The relevant core class is typo3/cms-core/Classes/TypoScript/IncludeTree/SysTemplateTreeBuilder.php:237; the function is handleSetInclude().
The solution would be to handle $path with GeneralUtility::fixWindowsFilePath($path), which is not implemented by the core team yet. The bug is known - see https://forge.typo3.org/issues/105713.
Turns out I was indeed using the SDK wrong. I was using a custom user object and custom methods to log in, and in that case you do indeed need to provide the header manually...
Next time, just use the provided methods.
If you installed Docker via brew, force-uninstalling and reinstalling with these commands will fix it:
brew uninstall --cask docker --force
brew uninstall --formula docker --force
brew install --cask docker
Same problem for me. I have been using this technique for years in Power Apps, but it is no longer working since Win 11.
Note this is a client side OS issue with Windows 11 and nothing to do with configuring SharePoint admin settings.
Has anyone figured out what to do in the Windows OS to solve this?
httpd.apache.org/docs/trunk/mod/mod_rewrite.html#rewriterule - Per-directory Rewrites: "The If blocks follow the rules of the directory context."
My fault for not understanding the documentation.
What I suggest is to precompute the rectangles and the hotspot.
The rectangles can be recalculated whenever a shape is added or moved. The hotspot can be calculated independently of the paint event: one way is to calculate it as the data is streamed in, so the draw just references the precalculated spot. You can put that calculation on a different thread so it does not block the drawing.
Another way is to start the hotspot calculation at the beginning of the paint event, but on a different thread; then, after the shapes are drawn, wait for the thread to finish the calculation and draw the hotspot. That way the shape-drawing time overlaps with the hotspot-calculation time.
Try using the Graphics drawing methods like DrawEllipse, DrawPolygon, and DrawLines. They are much faster than setting pixels one by one. (I could be wrong about the pixel-by-pixel drawing, but that's what I gather from the code.)
Try making fewer function calls from the paint event. Each function call adds time (not much, but it can help sometimes).
In our case it was fixed by running npx expo install in the main & android modules.
In my case this was an IP address problem in the security settings I had configured for my IIS SMTP server.
I had a whitelist of IPs that were allowed to connect and relay in my company, and I had forgotten to add an IP for my VPN tunnel while working from home.
After allowing my laptop's IP in the connection and relay preferences on the IIS SMTP server, the error went away.
Sure, it is possible. The described problems were caused by a forgotten mapping from child to parent. A very trivial error; my apologies for wasting others' time. The variant with MapsId is correct; only the entities need to be wired correctly.